There is no shortage of researchers and industry titans willing to warn us about the potential destructive power of artificial intelligence. Reading the headlines, one would hope that the rapid gains in AI technology have also brought forth a unifying realization of the risks—and the steps we need to take to mitigate them.
The reality, unfortunately, is quite different. Beneath almost all of the testimony, the manifestoes, the blog posts, and the public declarations issued about AI are battles among deeply divided factions. Some are concerned about far-future risks that sound like science fiction. Some are genuinely alarmed by the practical problems that chatbots and deepfake video generators are creating right now. Some are motivated by potential business revenue, others by national security concerns.
The result is a cacophony of coded language, contradictory views, and provocative policy demands that are undermining our ability to grapple with a technology destined to drive the future of politics, our economy, and even our daily lives. — Read More
Tag Archives: Cyber
National Security Agency is starting an artificial intelligence security center
The National Security Agency is starting an artificial intelligence security center — a crucial mission as AI capabilities are increasingly acquired, developed and integrated into U.S. defense and intelligence systems, the agency’s outgoing director announced Thursday.
Army Gen. Paul Nakasone said the center would be incorporated into the NSA’s Cybersecurity Collaboration Center, where it works with private industry and international partners to harden the U.S. defense-industrial base against threats from adversaries led by China and Russia. — Read More
Chinese social media campaigns are successfully impersonating U.S. voters, Microsoft warns
Chinese state-aligned influence and disinformation campaigns are impersonating U.S. voters and targeting political candidates on multiple social media platforms with improved sophistication, Microsoft said in a threat analysis report Thursday.
Chinese Communist Party-affiliated “covert influence operations have now begun to successfully engage with target audiences on social media to a greater extent than previously observed,” according to the report, which focused on the rise in “digital threats from East Asia.” — Read More
Google’s AI Red Team: the ethical hackers making AI safer
Today, we’re publishing information on Google’s AI Red Team for the first time.
Last month, we introduced the Secure AI Framework (SAIF), designed to help address risks to AI systems and drive security standards for the technology in a responsible manner.
To build on this momentum, today, we’re publishing a new report to explore one critical capability that we deploy to support SAIF: red teaming. We believe that red teaming will play a decisive role in preparing every organization for attacks on AI systems and look forward to working together to help everyone utilize AI in a secure way. The report examines our work to stand up a dedicated AI Red Team and includes three important areas: 1) what red teaming in the context of AI systems is and why it is important; 2) what types of attacks AI red teams simulate; and 3) lessons we have learned that we can share with others. — Read More
Why Amazon, Google and Other Tech Giants Are Flouting Some New Government Cybersecurity Recommendations
Major technology companies are resisting the Biden administration’s push to make basic security features free and automatic in products like their popular cloud platforms, forgoing changes that could neutralize many cyberattacks.
Amazon, Google, Microsoft, IBM, and Oracle are among the tech giants defying elements of recently issued guidance from the Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency that seeks to encourage the adoption of these so-called “secure-by-default” features. — Read More
A New Kill Chain Approach to Disrupting Online Threats
If the internet is a battlefield between threat actors and the investigators who defend against them, that field has never been so crowded. The threats range from hacking to scams, election interference to harassment. The people behind them include intelligence services, troll farms, hate groups, and commercial companies of cyber mercenaries. The defenders include investigators at tech companies, universities, think tanks, government agencies, and media outlets.
… As long as the defenders remain siloed, without a common framework to understand and discuss threats, there is a risk that blended and cross-platform operations like these will be able to find a weak point and exploit it.
To help break down those siloes between investigators in different fields, companies, and institutions, we have developed a framework to analyze, map, and disrupt many different sorts of online threats: a kill chain for online operations. — Read More
Perfectly Secure Steganography Using Minimum Entropy Coupling
Steganography is the practice of encoding secret information into innocuous content in such a manner that an adversarial third party would not realize that there is hidden meaning. While this problem has classically been studied in security literature, recent advances in generative models have led to a shared interest among security and machine learning researchers in developing scalable steganography techniques. In this work, we show that a steganography procedure is perfectly secure under Cachin (1998)’s information theoretic-model of steganography if and only if it is induced by a coupling. Furthermore, we show that, among perfectly secure procedures, a procedure is maximally efficient if and only if it is induced by a minimum entropy coupling. These insights yield what are, to the best of our knowledge, the first steganography algorithms to achieve perfect security guarantees with non-trivial efficiency; additionally, these algorithms are highly scalable. To provide empirical validation, we compare a minimum entropy coupling-based approach to three modern baselines — arithmetic coding, Meteor, and adaptive dynamic grouping — using GPT-2, WaveRNN, and Image Transformer as communication channels. We find that the minimum entropy coupling-based approach achieves superior encoding efficiency, despite its stronger security constraints. In aggregate, these results suggest that it may be natural to view information-theoretic steganography through the lens of minimum entropy coupling. — Read More
People Are Pirating GPT-4 By Scraping Exposed API Keys
Why pay for $150,000 worth of OpenAI access when you could just steal it?
People on the Discord for the r/ChatGPT subreddit are advertising stolen OpenAI API tokens that have been scraped from other peoples’ code, according to chat logs, screenshots and interviews. People using the stolen API keys can then implement GPT-4 while racking up usage charges to the stolen OpenAI account. — Read More
Introducing Charlotte AI, CrowdStrike’s Generative AI Security Analyst: Ushering in the Future of AI-Powered Cybersecurity
… Charlotte AI is a new generative AI security analyst that uses the world’s highest-fidelity security data and is continuously improved by a tight feedback loop with CrowdStrike’s industry-leading threat hunters, managed detection and response operators, and incident response experts. This is the first offering built using our Charlotte AI engine and will help users of all skill levels improve their ability to stop breaches while reducing security operations complexity. Customers can ask questions in plain English and dozens of other languages to receive intuitive answers from the CrowdStrike Falcon platform.
Currently available in private customer preview, Charlotte AI initially addresses three common use cases:
- Democratizing Cybersecurity – Every User Becomes a Power User: With Charlotte AI, everyone from the IT helpdesk to executives like CISOs and CIOs can quickly ask straightforward questions such as “What is our risk level against the latest Microsoft vulnerability?” to directly gain real-time, actionable insights, drive better risk-based decision making and accelerate time to response.
- Elevate Security Analyst Productivity with AI-Powered Threat Hunting: Charlotte AI will empower less experienced IT and security professionals to make better decisions faster, closing the skills gap and reducing response time to critical incidents. New security analysts, such as a Tier 1 member of a SOC, will now be able to operate the CrowdStrike Falcon platform like a more advanced SOC analyst.
- The Ultimate Force Multiplier for Security Experts: Charlotte AI will enable the most experienced security experts to automate repetitive tasks like data collection, extraction and basic threat search and detection while making it easier to perform more advanced security actions. It will also accelerate enterprise-wise XDR use cases across every attack surface and third-party product, directly from the CrowdStrike Falcon platform. Hunting and remediating threats across the organization will be faster and easier by asking simple natural language queries.
#cyber
DarkBERT: A Language Model for the Dark Side of the Internet
Recent research has suggested that there are clear differences in the language used in the Dark Web compared to that of the Surface Web. As studies on the Dark Web commonly require textual analysis of the domain, language models specific to the Dark Web may provide valuable insights to researchers. In this work, we introduce DarkBERT, a language model pretrained on Dark Web data. We describe the steps taken to filter and compile the text data used to train DarkBERT to combat the extreme lexical and structural diversity of the Dark Web that may be detrimental to building a proper representation of the domain. We evaluate DarkBERT and its vanilla counterpart along with other widely used language models to validate the benefits that a Dark Web domain specific model offers in various use cases. Our evaluations show that DarkBERT outperforms current language models and may serve as a valuable resource for future research on the Dark Web. — Read More