It is estimated that 180,000 Americans live with quadriplegia, and each year, an additional ~18,000 suffer a paralyzing spinal cord injury. We live in a digital society where much of our work, entertainment, and social lives rely heavily on our use of computers and smart devices. People with quadriplegia often find that their needs to engage seamlessly with the digital world go unmet, leading to decreased independence, isolation, and financial challenges. Our goal is to provide a high-performance interface that will enhance the control of digital devices for people with quadriplegia, unlocking their personal and professional potential.
The first step toward this goal was achieved just over 100 days ago at Barrow Neurological Institute in Phoenix, Arizona, where Noland Arbaugh, the first participant of the PRIME Study*, received his Neuralink implant (Link). As noted in our last blog post, the surgery went extremely well, and he was able to go home the following day.
The aim of the PRIME Study is to demonstrate that the Link is safe and useful in daily life. We will monitor its technical performance remotely and quantify any benefit it provides by timing the duration of independent use and assessing how it affects study participants’ quality of life. — Read More
OpenAI Is ‘Exploring’ How to Responsibly Generate AI Porn
OpenAI released draft documentation Wednesday laying out how it wants ChatGPT and its other AI technology to behave. Part of the lengthy Model Spec document discloses that the company is exploring a leap into porn and other explicit content.
OpenAI’s usage policies currently prohibit sexually explicit or even suggestive materials, but a “commentary” note on part of the Model Spec related to that rule says the company is considering how to permit such content. — Read More
Back to the Hype: An Update on How Cybercriminals Are Using GenAI
In August 2023, we published an article detailing how criminals were using or planning to use generative AI (GenAI) capabilities to help develop, spread, and improve their attacks. Given the fast-paced nature of AI evolution, we decided to circle back and see if there have been developments worth sharing since then. Eight months might seem short, but in the fast-growing world of AI, this period is an eternity.
Compared to eight months ago, our conclusions have not changed: While criminals are still taking advantage of the possibilities that ChatGPT and other LLMs offer, we remain skeptical of the advanced AI-powered malware scenarios that several media outlets seemed to dread back then. We want to explore the matter further and pick apart the details that make this a fascinating topic.
We also want to address pertinent questions on the matter. Have there been any new criminal LLMs beyond those reported last year? Are criminals offering ChatGPT-like capabilities in hacking software? How are deepfakes being offered on criminal sites?
In sum, however, criminals are still lagging behind on AI adoption. We discuss our observations and findings in the following sections. — Read More
Anduril Reveals ‘Pulsar’ Family of AI-Learning Electronic Warfare Systems
On May 6, the defense company Anduril Industries revealed it had secretly developed a family of AI-enhanced electronic warfare systems called Pulsar that is already in operational use on multiple continents, including in two combat zones, with clients including the U.S. military.
… Pulsar is described as leveraging AI to recognize and adapt to never-before-seen threats, a traditional Achilles heel of AI. Like the Borg in Star Trek, it’s intended to rapidly identify and analyze unfamiliar threats (anomalous signals) and harness AI to devise a countermeasure. The resulting new threat data and countermeasures are then distributed across the network of Pulsar systems. — Read More
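Anduril has not published technical details of how Pulsar's AI works, but the general notion of flagging "anomalous signals" can be sketched generically: compare an incoming emitter's features against a library of known signatures and flag anything that matches none of them. The snippet below is a hypothetical illustration with made-up feature vectors and thresholds, not a description of Pulsar.

```python
# Hypothetical sketch of flagging "anomalous signals": compare a new emitter's
# features against a library of known signatures and flag anything that falls
# far from all of them. Generic illustration only; not how Pulsar works.
import numpy as np

# Toy feature vectors for known emitters:
# (center frequency in GHz, bandwidth in MHz, pulse width in microseconds).
# A real system would normalize features instead of mixing raw units.
known_signatures = np.array([
    [9.4, 20.0, 1.0],    # e.g., a known marine radar
    [2.4, 40.0, 0.5],    # e.g., a known datalink
])

def is_anomalous(signal_features, threshold=1.0):
    """Flag a signal whose features are far from every known signature."""
    distances = np.linalg.norm(known_signatures - signal_features, axis=1)
    return distances.min() > threshold

print(is_anomalous(np.array([9.4, 20.5, 1.0])))    # close to a known radar -> False
print(is_anomalous(np.array([5.8, 200.0, 0.1])))   # matches nothing -> True
```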
Deja Vu: Contextual Sparsity for Efficient LLMs at Inference Time
Large language models (LLMs) with hundreds of billions of parameters have sparked a new wave of exciting AI applications. However, they are computationally expensive at inference time. Sparsity is a natural approach to reduce this cost, but existing methods either require costly retraining, have to forgo LLMs’ in-context learning ability, or do not yield wall-clock time speedup on modern hardware. We hypothesize that contextual sparsity, that is, small, input-dependent sets of attention heads and MLP parameters that yield approximately the same output as the dense model for a given input, can address these issues. We show that contextual sparsity exists, that it can be accurately predicted, and that we can exploit it to speed up LLM inference in wall-clock time without compromising the model’s quality or in-context learning ability. Based on these insights, we propose DejaVu, a system that uses a low-cost algorithm to predict contextual sparsity on the fly given inputs to each layer, along with an asynchronous and hardware-aware implementation that speeds up LLM inference. We validate that DejaVu can reduce the inference latency of OPT-175B by over 2X compared to the state-of-the-art FasterTransformer, and over 6X compared to the widely used Hugging Face implementation, without compromising model quality. The code is available at this https URL. — Read More
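A toy sketch may help make the idea concrete: for a given input, a cheap selection step picks the MLP neurons likely to matter, and the layer computes only those, skipping the rest of the dense matrix multiply. The sizes and function names below are illustrative assumptions, and the oracle-style selection is a stand-in: DejaVu trains low-cost predictors to guess the important neurons on the fly rather than using the oracle shown here.

```python
# Toy sketch of contextual sparsity (illustrative only, not the DejaVu code).
# Idea: for a given input, only a subset of MLP neurons matters, so we select
# that subset per input and skip the rest of the dense matrix multiply.
import torch

torch.manual_seed(0)
d_model, d_ff, k = 64, 256, 64  # hypothetical sizes; keep only the top-k neurons

W_in = torch.randn(d_ff, d_model) / d_model ** 0.5   # MLP up-projection
W_out = torch.randn(d_model, d_ff) / d_ff ** 0.5     # MLP down-projection

def dense_mlp(x):
    return W_out @ torch.relu(W_in @ x)

def sparse_mlp(x):
    # Oracle selection: pick the neurons with the largest pre-activations.
    # DejaVu instead trains a low-cost predictor to approximate this set,
    # so the selection itself is cheap at inference time.
    idx = torch.topk(W_in @ x, k).indices
    hidden = torch.relu(W_in[idx] @ x)     # compute only the selected rows
    return W_out[:, idx] @ hidden          # and the matching output columns

x = torch.randn(d_model)
err = (torch.linalg.norm(dense_mlp(x) - sparse_mlp(x))
       / torch.linalg.norm(dense_mlp(x))).item()
print(f"relative error from using {k}/{d_ff} neurons: {err:.3f}")
```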
Ukraine Is Riddled With Land Mines. Drones and AI Can Help
Early on a June morning in 2023, my colleagues and I drove down a bumpy dirt road north of Kyiv in Ukraine. The Ukrainian Armed Forces were conducting training exercises nearby, and mortar shells arced through the sky. We arrived at a vast field for a technology demonstration set up by the United Nations. Across the 25-hectare field—that’s about the size of 62 American football fields—the U.N. workers had scattered 50 to 100 inert mines and other ordnance. Our task was to fly our drone over the area and use our machine learning software to detect as many as possible. And we had to turn in our results within 72 hours.
The scale was daunting: The area was 10 times as large as anything we’d attempted before with our drone demining startup, Safe Pro AI. My cofounder Gabriel Steinberg and I used flight-planning software to program a drone to cover the whole area with some overlap, taking photographs the whole time. It ended up taking the drone 5 hours to complete its task, and it came away with more than 15,000 images. Then we raced back to the hotel with the data it had collected and began an all-night coding session.
We were happy to see that our custom machine learning model took only about 2 hours to crunch through all the visual data and identify potential mines and ordnance. But constructing a map for the full area that included the specific coordinates of all the detected mines in under 72 hours was simply not possible with any reasonable computational resources. The following day (which happened to coincide with the short-lived Wagner Group rebellion), we rewrote our algorithms so that our system mapped only the locations where suspected land mines were identified—a more scalable solution for our future work. — Read More
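To give a rough sense of that second approach (the details here are my assumptions, not Safe Pro AI's actual pipeline): instead of stitching thousands of photos into one georeferenced map, each detection's pixel offset can be converted directly to a GPS coordinate using the photo's geotag, the flight altitude, and the camera's ground sampling distance, under a flat-terrain, camera-pointing-straight-down approximation.

```python
# Simplified sketch: georeference individual detections instead of building a
# full map of the whole area (flat-terrain, nadir-camera approximation).
import math

def ground_sampling_distance(altitude_m, focal_mm, sensor_width_mm, image_width_px):
    """Approximate meters on the ground covered by one pixel."""
    return (altitude_m * sensor_width_mm) / (focal_mm * image_width_px)

def detection_to_latlon(drone_lat, drone_lon, det_px, det_py,
                        image_width_px, image_height_px, gsd_m):
    """Convert a detection's pixel position to lat/lon, assuming the image
    center sits at the drone's geotag and the camera points straight down,
    aligned north-up (a simplification; real pipelines correct for yaw)."""
    dx_m = (det_px - image_width_px / 2) * gsd_m    # east offset in meters
    dy_m = (image_height_px / 2 - det_py) * gsd_m   # north offset in meters
    dlat = dy_m / 111_320.0                          # meters per degree of latitude
    dlon = dx_m / (111_320.0 * math.cos(math.radians(drone_lat)))
    return drone_lat + dlat, drone_lon + dlon

# Example: one suspected mine detected at pixel (3100, 1200) in a 4000x3000 photo
# taken at 60 m altitude (camera parameters are invented for illustration).
gsd = ground_sampling_distance(altitude_m=60, focal_mm=8.8,
                               sensor_width_mm=13.2, image_width_px=4000)
print(detection_to_latlon(50.5, 30.5, 3100, 1200, 4000, 3000, gsd))
```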
How LLMs Work, Explained Without Math
I’m sure you agree that it has become impossible to ignore Generative AI (GenAI), as we are constantly bombarded with mainstream news about Large Language Models (LLMs). Very likely you have tried ChatGPT, maybe even keep it open all the time as an assistant.
A basic question I think a lot of people have about the GenAI revolution is where the apparent intelligence of these models comes from. In this article, I’m going to attempt to explain in simple terms and without using advanced math how generative text models work, to help you think about them as computer algorithms and not as magic. — Read More
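That algorithmic view boils down to a loop: the model assigns probabilities to possible next tokens, one token is sampled, it is appended to the text, and the loop repeats. Here is a minimal sketch of that loop using a toy bigram table as a stand-in for a real model; the table and its probabilities are invented purely for illustration.

```python
# Minimal sketch of generative text as an algorithm: predict a distribution
# over next tokens, sample one, append it, repeat. The "model" here is a toy
# bigram table standing in for the billions of learned parameters in an LLM.
import random

# Hypothetical next-token probabilities, as if learned from some training text.
bigram_probs = {
    "the": {"cat": 0.5, "dog": 0.5},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"ran": 0.6, "sat": 0.4},
    "sat": {"down": 1.0},
    "ran": {"away": 1.0},
}

def generate(prompt_token, max_tokens=4):
    tokens = [prompt_token]
    for _ in range(max_tokens):
        dist = bigram_probs.get(tokens[-1])
        if dist is None:                      # no known continuation: stop
            break
        choices, weights = zip(*dist.items())
        tokens.append(random.choices(choices, weights=weights)[0])
    return " ".join(tokens)

print(generate("the"))   # e.g., "the cat sat down"
```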
New Microsoft AI model may challenge GPT-4 and Google Gemini
Microsoft is working on a new large-scale AI language model called MAI-1, which could potentially rival state-of-the-art models from Google, Anthropic, and OpenAI, according to a report by The Information. This marks the first time Microsoft has developed an in-house AI model of this magnitude since investing over $10 billion in OpenAI for the rights to reuse the startup’s AI models. OpenAI’s GPT-4 powers not only ChatGPT but also Microsoft Copilot. — Read More
Microsoft launches AI chatbot for spies
Microsoft has introduced a GPT-4-based generative AI model designed specifically for US intelligence agencies that operates disconnected from the Internet, according to a Bloomberg report. This reportedly marks the first time Microsoft has deployed a major language model in a secure setting, designed to allow spy agencies to analyze top-secret information without connectivity risks—and to allow secure conversations with a chatbot similar to ChatGPT and Microsoft Copilot. But it may also mislead officials if not used properly due to inherent design limitations of AI language models. — Read More