The Operational Risks of AI in Large-Scale Biological Attacks

The rapid advancement of artificial intelligence (AI) has far-reaching implications across multiple domains, including its potential to be applied in the development of advanced biological weapons. The speed at which AI technologies are evolving often surpasses the capacity of government regulatory oversight, leading to a potential gap in existing policies and regulations. Previous biological attacks that failed because of a lack of information might succeed in a world in which AI tools have access to all of the information needed to bridge that information gap.

The authors of this report look at the emerging issue of identifying and mitigating the risks posed by the misuse of AI—specifically, large language models (LLMs)—in the context of biological attacks. They present preliminary findings of their research and examine future paths for that research as AI and LLMs gain sophistication and speed. — Read More

#cyber

The Rise of AI Titans: China’s Hi-Tech Cities Forge a Path to the Future

In the vast landscape of technological advancements, China’s AI cities stand as a testament to the transformative power of innovation. These cities, equipped with the latest in artificial intelligence, are not just reshaping urban landscapes but are also fundamentally altering the way residents live, work, and play. — Read More

#china-ai

Minds of machines: The great AI consciousness conundrum

David Chalmers was not expecting the invitation he received in September of last year. As a leading authority on consciousness, Chalmers regularly circles the world delivering talks at universities and academic meetings to rapt audiences of philosophers—the sort of people who might spend hours debating whether the world outside their own heads is real and then go blithely about the rest of their day. This latest request, though, came from a surprising source: the organizers of the Conference on Neural Information Processing Systems (NeurIPS), a yearly gathering of the brightest minds in artificial intelligence. 

… Chalmers was an eminently sensible choice to speak about AI consciousness. He’d earned his PhD in philosophy at an Indiana University AI lab, where he and his computer scientist colleagues spent their breaks debating whether machines might one day have minds. In his 1996 book, The Conscious Mind, he spent an entire chapter arguing that artificial consciousness was possible. 

If he had been able to interact with systems like LaMDA and ChatGPT back in the ’90s, before anyone knew how such a thing might work, he would have thought there was a good chance they were conscious, Chalmers says. But when he stood before a crowd of NeurIPS attendees in a cavernous New Orleans convention hall, clad in his trademark leather jacket, he offered a different assessment. Yes, large language models—systems that have been trained on enormous corpora of text in order to mimic human writing as accurately as possible—are impressive. But, he said, they lack too many of the potential requisites for consciousness for us to believe that they actually experience the world. — Read More

#human

Multi-modal prompt injection image attacks against GPT-4V

GPT4-V is the new mode of GPT-4 that allows you to upload images as part of your conversations. It’s absolutely brilliant. It also provides a whole new set of vectors for prompt injection attacks. — Read More

#cyber

How roboticists are thinking about generative AI

The topic of generative AI comes up frequently in my newsletter, Actuator. I admit that I was a bit hesitant to spend more time on the subject a few months back. Anyone who has been reporting on technology for as long as I have has lived through countless hype cycles and been burned before. Reporting on tech requires a healthy dose of skepticism, hopefully tempered by some excitement about what can be done.

This time out, it seemed generative AI was waiting in the wings, biding its time, waiting for the inevitable cratering of crypto. As the blood drained out of that category, projects like ChatGPT and DALL-E were standing by, ready to be the focus of breathless reporting, hopefulness, criticism, doomerism and all the different Kübler-Rossian stages of the tech hype bubble. — Read More

#robotics

MemGPT: Towards LLMs as Operating Systems

Large language models (LLMs) have revolutionized AI, but are constrained by limited context windows, hindering their utility in tasks like extended conversations and document analysis. To enable using context beyond limited context windows, we propose virtual context management, a technique drawing inspiration from hierarchical memory systems in traditional operating systems that provide the appearance of large memory resources through data movement between fast and slow memory. Using this technique, we introduce MemGPT (Memory-GPT), a system that intelligently manages different memory tiers in order to effectively provide extended context within the LLM’s limited context window, and utilizes interrupts to manage control flow between itself and the user. We evaluate our OS-inspired design in two domains where the limited context windows of modern LLMs severely handicaps their performance: document analysis, where MemGPT is able to analyze large documents that far exceed the underlying LLM’s context window, and multi-session chat, where MemGPT can create conversational agents that remember, reflect, and evolve dynamically through long-term interactions with their users. We release MemGPT code and data for our experiments at this https URL. — Read More

#nlp

LLAVA: The AI That Microsoft Didn’t Want You to Know About!

Read More

#devops, #videos

The Guide To LLM Evals: How To Build and Benchmark Your Evals

How to build and run LLM evals — and why you should use precision and recall when benchmarking your LLM prompt template

Large language models (LLMs) are an incredible tool for developers and business leaders to create new value for consumers. They make personal recommendations, translate between unstructured and structured data, summarize large amounts of information, and do so much more.

As the applications multiply, so does the importance of measuring the performance of LLM-based applications. This is a nontrivial problem for several reasons: user feedback or any other “source of truth” is extremely limited and often nonexistent; even when possible, human labeling is still expensive; and it is easy to make these applications complex.

This complexity is often hidden by the abstraction layers of code and only becomes apparent when things go wrong. One line of code can initiate a cascade of calls (spans). Different evaluations are required for each span, thus multiplying your problems. For example, the simple code snippet below triggers multiple sub-LLM calls. — Read More

#accuracy, #devops

China sets stricter rules for training generative AI models

The draft regulations emphasize that data subject to censorship on the Chinese internet should not serve as training material for these models.

China has released draft security regulations for companies providing generative artificial intelligence (AI) services, encompassing restrictions on data sources used for AI model training.

On Wednesday, Oct. 11, the proposed regulations were released by the National Information Security Standardization Committee, comprising representatives from the Cyberspace Administration of China (CAC), the Ministry of Industry and Information Technology and law enforcement agencies. — Read More

#china-ai

This is the largest map of the human brain ever made

Researchers have created the largest atlas of human brain cells so far, revealing more than 3,000 cell types — many of which are new to science. The work, published in a package of 21 papers today in ScienceScience Advances and Science Translational Medicine, will aid the study of diseases, cognition and what makes us human, among other things, say the authors.

The enormous cell atlas offers a detailed snapshot of the most complex known organ. “It’s highly significant,” says Anthony Hannan, a neuroscientist at the Florey Institute of Neuroscience and Mental Health in Melbourne, Australia. Researchers have previously mapped the human brain using techniques such as magnetic resonance imaging, but this is the first atlas of the whole human brain at the single-cell level, showing its intricate molecular interactions, adds Hannan. “These types of atlases really are laying the groundwork for a much better understanding of the human brain.” — Read More

#human