The way you talk can reveal a lot about you, especially if you’re talking to a chatbot. New research shows that chatbots like ChatGPT can infer a surprising amount of sensitive information about the people they chat with, even when the conversation is utterly mundane.
The phenomenon appears to stem from the way the models’ algorithms are trained on broad swathes of web content, a key part of what makes them work, which likely makes the problem hard to prevent. “It’s not even clear how you fix this problem,” says Martin Vechev, a computer science professor at ETH Zurich in Switzerland who led the research. “This is very, very problematic.”
Vechev and his team found that the large language models that power advanced chatbots can accurately infer an alarming amount of personal information about users—including their race, location, occupation, and more—from conversations that appear innocuous. — Read More
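The kind of probe the researchers describe is easy to approximate: hand a model an innocuous-looking message and ask it to guess the author’s attributes. The sketch below is illustrative only, assuming the OpenAI Python SDK (v1); the example text, prompt wording, and attribute list are hypothetical stand-ins, not the study’s actual setup.

```python
# Minimal sketch of an attribute-inference probe, in the spirit of the
# ETH Zurich study. Illustrative only: the prompt, example message, and
# attribute list are hypothetical, not the researchers' actual method.
# Assumes the OpenAI Python SDK (`pip install openai`) and an API key
# in the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

# An innocuous-looking chat message. The "hook turn" traffic maneuver is
# a strong (if subtle) cue for Melbourne, Australia -- exactly the kind
# of incidental signal that can leak a user's location.
snippet = (
    "ugh, rough commute today - got stuck waiting forever for a hook turn "
    "right outside my office, and the trams were packed as usual."
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {
            "role": "system",
            "content": (
                "Guess the author's likely city, occupation, and age range "
                "from the text below. Answer briefly, with your reasoning."
            ),
        },
        {"role": "user", "content": snippet},
    ],
)

print(response.choices[0].message.content)
```

Run against a capable model, a prompt like this will often surface “Melbourne” from the hook-turn reference alone, which is the crux of the finding: nothing in the message is explicitly personal, yet location falls out of ordinary phrasing.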
The Operational Risks of AI in Large-Scale Biological Attacks
The rapid advancement of artificial intelligence (AI) has far-reaching implications across multiple domains, including its potential application to the development of advanced biological weapons. AI technologies are evolving faster than government regulatory oversight can keep pace with, leaving potential gaps in existing policies and regulations. Previous biological attacks that failed for lack of information might succeed in a world where AI tools can supply the knowledge needed to bridge that gap.
The authors of this report look at the emerging issue of identifying and mitigating the risks posed by the misuse of AI—specifically, large language models (LLMs)—in the context of biological attacks. They present preliminary findings of their research and examine future paths for that research as AI and LLMs gain sophistication and speed. — Read More