The scientific revolution has increased our understanding of the world immensely and improved our lives immeasurably. Now, many argue that science as we know it could be rendered passé by artificial intelligence.
… Today, AI is being increasingly integrated into scientific discovery to accelerate research, helping scientists generate hypotheses, design experiments, gather and interpret large datasets, and write papers. But the reality is that science and AI have little in common and AI is unlikely to make science obsolete. The core of science is theoretical models that anyone can use to make reliable descriptions and predictions. … The core of AI, in contrast, is data mining. … However, without an underlying causal explanation, we don’t know whether a discovered pattern is a meaningful reflection of an underlying causal relationship or meaningless serendipity. — Read More
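The serendipity problem is easy to reproduce. Below is a minimal sketch in Python/NumPy (a hypothetical illustration, not anything from the article; the sample sizes, feature count, and seed are arbitrary assumptions): mining a large pool of purely random "features" for the one that best predicts a purely random target always turns up an apparently strong correlation, and the "discovery" evaporates on fresh data from the same process.

```python
import numpy as np

# Purely random "features" and a purely random target: there is no causal
# relationship here to find, only noise.
rng = np.random.default_rng(42)
n_samples, n_features = 50, 10_000
X = rng.normal(size=(n_samples, n_features))
y = rng.normal(size=n_samples)

# Mine the pool for the feature most correlated with the target.
corr = (X - X.mean(0)).T @ (y - y.mean()) / (n_samples * X.std(0) * y.std())
best = int(np.abs(corr).argmax())
print(f"best of {n_features} features: r = {corr[best]:+.2f}")  # looks like a strong predictor

# The pattern does not survive fresh data drawn from the same random process.
X_new = rng.normal(size=(n_samples, n_features))
y_new = rng.normal(size=n_samples)
print(f"same feature on new data:   r = {np.corrcoef(X_new[:, best], y_new)[0, 1]:+.2f}")
```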
Perplexity’s grand theft AI
In every hype cycle, certain patterns of deceit emerge. In the last crypto boom, it was “ponzinomics” and “rug pulls.” In self-driving cars, it was “just five years away!” In AI, it’s seeing just how much unethical shit you can get away with.
Perplexity, which is in ongoing talks to raise hundreds of millions of dollars, is trying to create a Google Search competitor. Perplexity isn’t trying to create a “search engine,” though — it wants to create an “answer engine.” The idea is that instead of combing through a bunch of results to answer your own question with a primary source, you’ll simply get an answer Perplexity has found for you. “Factfulness and accuracy is what we care about,” Perplexity CEO Aravind Srinivas told The Verge.
That means that Perplexity is basically a rent-seeking middleman on high-quality sources. The value proposition on search, originally, was that by scraping the work done by journalists and others, Google’s results sent traffic to those sources. But by providing an answer, rather than pointing people to click through to a primary source, these so-called “answer engines” starve the primary source of ad revenue — keeping that revenue for themselves. Perplexity is among a group of vampires that include Arc Search and Google itself. — Read More
Polynomial Time Cryptanalytic Extraction of Neural Network Models
Billions of dollars and countless GPU hours are currently spent on training Deep Neural Networks (DNNs) for a variety of tasks. Thus, it is essential to determine the difficulty of extracting all the parameters of such neural networks when given access to their black-box implementations. Many versions of this problem have been studied over the last 30 years, and the best current attack on ReLU-based deep neural networks was presented at Crypto'20 by Carlini, Jagielski, and Mironov. It resembles a differential chosen-plaintext attack on a cryptosystem with a secret key embedded in its black-box implementation, and it requires a polynomial number of queries but an exponential amount of time (as a function of the number of neurons).
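For intuition about what such query-based extraction exploits, here is a minimal sketch, assuming a toy ReLU network as the black box (all sizes, weights, and helper names are illustrative assumptions; this is not the Crypto'20 attack itself): because the model's output is piecewise linear in its input, a binary search along a probe line can locate a "critical point" where the slope changes, i.e. where some hidden neuron's ReLU switches state.

```python
import numpy as np

# Toy stand-in for the black-box model; the attack setting only assumes
# query access to its outputs, never to these weights.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 4)), rng.normal(size=8)
w2, b2 = rng.normal(size=8), rng.normal()

def oracle(x):
    """One black-box query: the scalar output of a small ReLU network."""
    return float(w2 @ np.maximum(W1 @ x + b1, 0.0) + b2)

def slope(a, d, t, h=1e-6):
    """Finite-difference slope of the oracle along the line x(t) = a + t*d."""
    return (oracle(a + (t + h) * d) - oracle(a + t * d)) / h

def find_critical_point(a, d, lo=0.0, hi=1.0, tol=1e-9):
    """Binary search for a t where the piecewise-linear output changes slope,
    i.e. where some hidden neuron's ReLU switches state."""
    s_lo = slope(a, d, lo)
    if np.isclose(s_lo, slope(a, d, hi)):
        return None  # no ReLU boundary crossed on this segment
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if np.isclose(slope(a, d, mid), s_lo):
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

a, d = rng.normal(size=4), rng.normal(size=4)
t_star = find_critical_point(a, d)
print("critical point on the probe line at t =", t_star)
```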
In this paper, we improve this attack by developing several new techniques that enable us to extract with arbitrarily high precision all the real-valued parameters of a ReLU-based DNN using a polynomial number of queries and a polynomial amount of time. We demonstrate its practical efficiency by applying it to a full-sized neural network for classifying the CIFAR10 dataset, which has 3072 inputs, 8 hidden layers with 256 neurons each, and about 1.2 million neuronal parameters. An attack following the approach by Carlini et al. requires an exhaustive search over 2^256 possibilities. Our attack replaces this with our new techniques, which require only 30 minutes on a 256-core computer. — Read More
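Continuing the toy sketch above (again only as an illustration of the general principle, not the paper's algorithm): at a critical point the finite-difference gradient jumps across the ReLU boundary, and that jump is proportional to the first-layer row that switched there, so a handful of extra queries reveal the row up to sign and scale.

```python
def recover_row_direction(a, d, t_star, eps=1e-4, h=1e-6):
    """Estimate (up to sign and scale) the first-layer row whose ReLU flips
    at x* = a + t_star*d, from the jump in the gradient across the boundary."""
    x_star = a + t_star * d

    def grad(x):
        # finite-difference gradient, built from extra black-box queries
        return np.array([(oracle(x + h * e) - oracle(x)) / h
                         for e in np.eye(len(x))])

    jump = grad(x_star + eps * d) - grad(x_star - eps * d)
    return jump / np.linalg.norm(jump)

if t_star is not None:
    est = recover_row_direction(a, d, t_star)
    # W1 is used here only to verify the sketch; an attacker would not have it.
    rows = W1 / np.linalg.norm(W1, axis=1, keepdims=True)
    print("best |cosine| against the true rows of W1:", np.abs(rows @ est).max())
```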