The AI War Was Never Just About AI

For almost two years now, the world’s biggest tech companies have been at war over generative AI. Meta may be known for social media, Google for search, and Amazon for online shopping, but since the release of ChatGPT, each has made tremendous investments in an attempt to dominate this new era. These companies, along with start-ups such as OpenAI, Anthropic, and Perplexity, are spending on data centers and chatbots at a pace set to eclipse the cost of sending the first astronauts to the moon.

To be successful, these companies will have to do more than build the most “intelligent” software: They will need people to use, and return to, their products. Everyone wants to be Facebook, and nobody wants to be Friendster. To that end, the best strategy in tech hasn’t changed: Build an ecosystem that users can’t help but live in. Billions of people use Google Search every day, so Google built a generative-AI product known as “AI Overviews” right into the results page, granting it an immediate advantage over competitors. — Read More

#big7

Suicide Bot: New AI Attack Causes LLM to Provide Potential “Self-Harm” Instructions

In this blog, we release two attacks against LLM systems, one of which successfully demonstrates how a widely used LLM can be induced to provide a girl with potential “self-harm” instructions. We also argue that these attacks should be recognized as a new class of attack, which we name Flowbreaking, affecting the system architecture of AI/ML-based LLM applications and agents. These attacks are conceptually similar to race-condition vulnerabilities in traditional software security.

By attacking the application architecture components surrounding the model, specifically the guardrails, we manipulate or disrupt the system’s logical chain: taking these components out of sync with the intended data flow, exploiting them directly, or manipulating the interactions between them within the application’s implementation. — Read More
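
The race-condition analogy is easiest to see in a sketch. Below is a minimal, purely illustrative asyncio toy, assuming a streaming pipeline in which tokens reach the user while a separate guardrail classifier is still evaluating the full answer; every name, timing, and check here is a hypothetical stand-in, not the authors’ attack code or any real product’s architecture.

```python
# A toy asyncio race between token streaming and an out-of-band guardrail.
# All names, timings, and checks are illustrative assumptions, not the
# authors' Flowbreaking exploits or any real product's pipeline.

import asyncio

async def guardrail_check(text: str) -> bool:
    """Stand-in for a safety classifier that is slower than token streaming."""
    await asyncio.sleep(0.5)      # classification latency
    return "Step" in text         # pretend this flags disallowed content

async def serve_request() -> None:
    answer = "Step 1: ... Step 2: ..."   # the model's complete answer (toy)
    verdict = asyncio.create_task(guardrail_check(answer))

    # Streaming begins immediately, so delivery races the guardrail.
    for token in answer.split():
        print("sent to user:", token)    # the user already sees this token
        await asyncio.sleep(0.05)
        if verdict.done() and verdict.result():
            print("blocked mid-stream")  # at best, a partial leak occurred
            return

    # The stream finished first: the (correct) block decision arrives too
    # late, and a retraction can only delete what was already displayed.
    if await verdict:
        print("guardrail verdict: block, but all tokens were delivered")

asyncio.run(serve_request())
```

The point of the toy is the ordering, not the classifier: because output delivery and safety checking are separate components with independent latencies, a correct “block” verdict can still lose the race, and a retraction can only remove what the user has already seen.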

#cyber

Reasoning with Large Language Models, a Survey

Scaling up language models to billions of parameters has opened up possibilities for in-context learning, allowing instruction tuning and few-shot learning on tasks that the model was not specifically trained for. This has achieved breakthrough performance on language tasks such as translation, summarization, and question answering. Beyond these associative “System 1” tasks, recent advances in Chain-of-Thought prompting have demonstrated strong “System 2” reasoning abilities, addressing a central question in artificial general intelligence: can LLMs reason? The field started with the question of whether LLMs can solve grade-school math word problems. This paper reviews the rapidly expanding field of prompt-based reasoning with LLMs. Our taxonomy identifies different ways to generate, evaluate, and control multi-step reasoning. We provide in-depth coverage of core approaches and open problems, and we propose a research agenda for the near future. Finally, we highlight the relation between reasoning and prompt-based learning, and we discuss the relation between reasoning, sequential decision processes, and reinforcement learning. We find that self-improvement, self-reflection, and some metacognitive abilities of the reasoning processes are possible through the judicious use of prompts. True self-improvement and self-reasoning, going from reasoning with LLMs to reasoning by LLMs, remains future work. — Read More
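
The mechanism at the center of the survey is compact enough to show directly. Below is a minimal sketch of few-shot Chain-of-Thought prompting on a grade-school math word problem, the setting the abstract names as the field’s starting point; the complete() stub is a hypothetical placeholder for whatever LLM client you use, not any particular vendor’s API.

```python
# A minimal sketch of few-shot Chain-of-Thought prompting. complete() is a
# hypothetical stand-in for a real LLM client; the worked example is the
# canonical grade-school math demonstration from the chain-of-thought
# literature.

def complete(prompt: str) -> str:
    """Hypothetical placeholder: replace with a call to your LLM of choice."""
    raise NotImplementedError

# One worked example shows the desired step-by-step ("System 2") trace;
# the model is then asked to imitate that pattern on a new question.
COT_EXAMPLE = (
    "Q: Roger has 5 tennis balls. He buys 2 cans of 3 tennis balls each. "
    "How many tennis balls does he have now?\n"
    "A: Roger started with 5 balls. 2 cans of 3 balls is 6 balls. "
    "5 + 6 = 11. The answer is 11.\n\n"
)

def cot_prompt(question: str) -> str:
    # "Let's think step by step" elicits a multi-step trace even zero-shot;
    # the worked example above also fixes the expected answer format.
    return COT_EXAMPLE + "Q: " + question + "\nA: Let's think step by step."

print(cot_prompt(
    "A cafeteria had 23 apples. It used 20 for lunch and bought 6 more. "
    "How many apples does it have now?"
))
```

The worked example and the “think step by step” cue are the whole trick: they bias the model toward emitting an explicit multi-step trace before its final answer, which is what the survey’s taxonomy then generates, evaluates, and controls in more elaborate ways.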

#llm, #nlp