In this episode of the podcast, Sam Harris speaks with Eric Schmidt about the ways artificial intelligence is shifting the foundations of human knowledge and posing questions of existential risk. Read More
#podcasts, #artificial-intelligence

Monthly Archives: April 2022
Is Interpreting ML Models a Dead-End?
The interpretation process can be detached from the model architecture
Models are nowadays our primary tool for understanding the phenomena around us, from the movement of the stars to the opinions and behavior of social groups. With the explosion of machine learning (ML) theory and technology, we have been equipped with the most powerful tool in the history of science for understanding a phenomenon and predicting its outcome under given conditions. We can now detect fraud, design transportation plans, and make progress on self-driving cars.
Despite machine learning's potential to model a phenomenon, its complexity has stood in the way of its democratization. While many models have an unquestionable ability to deliver the predictions we are looking for, their use in many industries is still limited for reasons such as a lack of computational power or limited software availability. Another reason, rarely discussed as a limiting factor, is the claimed impossibility of interpreting highly complex black-box or deep learning (DL) models. Accepting this claim, many practitioners find themselves trading prediction accuracy for model interpretability. Read More
The Paper that can Change the Foundations of all Blockchain Cryptography
One of the biggest breakthroughs in modern cryptography could have a deep impact on blockchain protocols.
Cryptography is at the heart of many blockchain protocols. From traditional proof-of-work (PoW) to modern L2 approaches such as ZK-rollups, many advanced cryptographic methods provide the foundation of blockchain runtimes and protocols. Consequently, there is an omnipresent question about the security robustness of any blockchain architecture. Naively, we assume that blockchain cryptographic implementations that have survived complex attacks are inherently secure, but that is far from an empirical proof. Is there a better way to verify the robustness of security algorithms? The answer seems to lie in a new paper that just won the National Security Agency (NSA)’s “Best Cybersecurity Research Paper Competition”, causing a lot of noise within the cryptography research community.
Titled “On One-way Functions and Kolmogorov Complexity”, the paper provides an answer to one of the longest-standing problems in cryptography. The problem concerns the existence of a mathematical construct called “one-way functions”, which can establish whether a method, such as a zero-knowledge proof in an L2 blockchain, is cryptographically secure. Read More
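To make the construct concrete: a one-way function is easy to evaluate in the forward direction but computationally infeasible to invert. A minimal sketch in Python, using SHA-256 as a *candidate* one-way function (its one-wayness is conjectured, not proven — which is exactly the open problem the paper addresses):

```python
import hashlib
from typing import Optional

def f(x: bytes) -> str:
    """Candidate one-way function: fast to compute in the forward direction."""
    return hashlib.sha256(x).hexdigest()

# Forward direction: trivial.
digest = f(b"some preimage")

def invert_by_search(target: str, max_tries: int) -> Optional[bytes]:
    """Inverting f: no generic attack better than brute-force search is known.

    For realistic inputs the search space is astronomically large, so this
    loop is hopeless in practice -- that asymmetry is what "one-way" means.
    """
    for i in range(max_tries):
        guess = str(i).encode()
        if f(guess) == target:
            return guess
    return None
```

If one-way functions provably exist, constructions built on them (commitments, pseudorandom generators, and by extension protocols like zero-knowledge proofs) rest on firm foundations; if they do not, much of modern cryptography collapses.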
Former Intelligence Officials, Citing Russia, Say Big Tech Monopoly Power is Vital to National Security
When the U.S. security state announces that Big Tech’s centralized censorship power must be preserved, we should ask what this reveals about whom this regime serves.
A group of former intelligence and national security officials on Monday issued a jointly signed letter warning that pending legislative attempts to restrict or break up the power of Big Tech monopolies — Facebook, Google, and Amazon — would jeopardize national security because, they argue, their centralized censorship power is crucial to advancing U.S. foreign policy. The majority of this letter is devoted to repeatedly invoking the grave threat allegedly posed to the U.S. by Russia as illustrated by the invasion of Ukraine, and it repeatedly points to the dangers of Putin and the Kremlin to justify the need to preserve Big Tech’s power in its maximalist form. Any attempts to restrict Big Tech’s monopolistic power would therefore undermine the U.S. fight against Moscow. Read More
Why it’s so damn hard to make AI fair and unbiased
There are competing notions of fairness — and sometimes they’re totally incompatible with each other.
…Computer scientists are used to thinking about “bias” in terms of its statistical meaning: A program for making predictions is biased if it’s consistently wrong in one direction or another. (For example, if a weather app always overestimates the probability of rain, its predictions are statistically biased.) That’s very clear, but it’s also very different from the way most people colloquially use the word “bias” — which is more like “prejudiced against a certain group or characteristic.”
The problem is that if there’s a predictable difference between two groups on average, then these two definitions will be at odds. If you design your search engine to make statistically unbiased predictions about the gender breakdown among CEOs, then it will necessarily be biased in the second sense of the word. And if you design it not to have its predictions correlate with gender, it will necessarily be biased in the statistical sense. Read More
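The conflict between the two definitions can be shown with a few lines of arithmetic. A minimal sketch with made-up base rates (the 8%/2% figures are illustrative assumptions, not real statistics):

```python
# Illustrative assumption: suppose 8% of group A and 2% of group B are CEOs.
base_rate = {"A": 0.08, "B": 0.02}

# Sense 1 (statistical): a predictor is unbiased if its scores match the
# true rates on average. A calibrated predictor must therefore output
# different scores for the two groups -- i.e. it correlates with group.
calibrated = dict(base_rate)

# Sense 2 (parity): scores must not correlate with group membership,
# so every group receives the same score.
parity_score = sum(base_rate.values()) / len(base_rate)  # 0.05
group_parity = {g: parity_score for g in base_rate}

# The parity predictor is now statistically biased: it underestimates
# group A's rate and overestimates group B's.
errors = {g: group_parity[g] - base_rate[g] for g in base_rate}
```

Whenever the base rates differ, one of the two error terms is negative and the other positive: satisfying the parity definition forces a violation of the statistical one, and vice versa.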
Are you at fault? Patterned after subreddit r/AmITheAsshole this AI will let you know
AYTA is a project created by WTTDOTM and Alex Petros and presented by Digital Void. It is a collection of three AI text-generation models trained on posts and comments from r/AmITheAsshole, and it answers the question you’ve been asking on Reddit for years: was my response to this reasonable, or am I the asshole in this situation?
AYTA responses are auto-generated and based on different datasets. The red model has only been trained on YTA responses and will always say you are at fault. The green model has only been trained on NTA responses and will always absolve you. And the white model was trained on the pre-filtered data. Have fun! Read More
Amazon releases 51-language dataset for language understanding
MASSIVE dataset and Massively Multilingual NLU (MMNLU-22) competition and workshop will help researchers scale natural-language-understanding technology to every language on Earth.
Imagine that all people around the world could use voice AI systems such as Alexa in their native tongues.
One promising approach to realizing this vision is massively multilingual natural-language understanding (MMNLU), a paradigm in which a single machine learning model can parse and understand inputs from many typologically diverse languages. By learning a shared data representation that spans languages, the model can transfer knowledge from languages with abundant training data to those in which training data is scarce. Read More
The Power of Natural Language Processing
Summary: The conventional wisdom around AI has been that while computers have the edge over humans when it comes to data-driven decision making, they can’t compete on qualitative tasks. That, however, is changing. Natural language processing (NLP) tools have advanced rapidly and can help with writing, coding, and discipline-specific reasoning. Companies that want to make use of this new tech should focus on the following: 1) Identify text data assets and determine how the latest techniques can be leveraged to add value for your firm, 2) understand how you might leverage AI-based language technologies to make better decisions or reorganize your skilled labor, 3) begin incorporating new language-based AI tools for a variety of tasks to better understand their capabilities, and 4) don’t underestimate the transformative potential of AI. Read More
#nlp

Planting Undetectable Backdoors in Machine Learning Models
Given the computational cost and technical expertise required to train machine learning models, users may delegate the task of learning to a service provider. Delegation of learning has clear benefits, and at the same time raises serious concerns of trust. This work studies possible abuses of power by untrusted learners. We show how a malicious learner can plant an undetectable backdoor into a classifier. On the surface, such a backdoored classifier behaves normally, but in reality, the learner maintains a mechanism for changing the classification of any input, with only a slight perturbation. Importantly, without the appropriate “backdoor key,” the mechanism is hidden and cannot be detected by any computationally-bounded observer. We demonstrate two frameworks for planting undetectable backdoors, with incomparable guarantees.
Our construction of undetectable backdoors also sheds light on the related issue of robustness to adversarial examples. In particular, by constructing an undetectable backdoor for an “adversarially robust” learning algorithm, we can produce a classifier that is indistinguishable from a robust classifier, but in which every input has an adversarial example! In this way, the existence of undetectable backdoors represents a significant theoretical roadblock to certifying adversarial robustness. Read More
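The behavioral contract the abstract describes — normal outputs on ordinary inputs, a key-controlled label flip on triggered ones — can be sketched in a few lines. This toy uses an HMAC tag as the "backdoor key"; it is *not* the paper's construction (the paper uses digital signature schemes to make the mechanism undetectable even to someone inspecting the model, whereas this check is plainly visible in the code), and `honest_classifier` is a made-up stand-in for a trained model:

```python
import hashlib
import hmac

SECRET_KEY = b"backdoor-key"  # known only to the malicious learner

def honest_classifier(x: str) -> int:
    # Stand-in for a normally trained model: classify by length parity.
    return len(x) % 2

def backdoored_classifier(x: str) -> int:
    # Inputs carrying a valid MAC tag under SECRET_KEY get their label
    # flipped; every other input is classified exactly as the honest model
    # would, so black-box behavior looks normal.
    if "|" in x:
        payload, tag = x.rsplit("|", 1)
        expected = hmac.new(SECRET_KEY, payload.encode(),
                            hashlib.sha256).hexdigest()[:8]
        if hmac.compare_digest(tag, expected):
            return 1 - honest_classifier(payload)
    return honest_classifier(x)

def activate_backdoor(x: str) -> str:
    # Only the key holder can produce the "slight perturbation" that
    # triggers the flip; without SECRET_KEY, valid tags are unforgeable.
    tag = hmac.new(SECRET_KEY, x.encode(), hashlib.sha256).hexdigest()[:8]
    return f"{x}|{tag}"
```

Without the key, forging a triggering input reduces to forging a MAC, which is why no computationally bounded observer can distinguish the backdoored model from an honest one by querying it.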
Artificial intelligence is creating a new colonial world order
This story is the introduction to MIT Technology Review’s series on AI colonialism, which was supported by the MIT Knight Science Journalism Fellowship Program and the Pulitzer Center. You can read part one here.
…The AI industry does not seek to capture land as the conquistadors of the Caribbean and Latin America did, but the same desire for profit drives it to expand its reach. The more users a company can acquire for its products, the more subjects it can have for its algorithms, and the more resources—data—it can harvest from their activities, their movements, and even their bodies.
Neither does the industry still exploit labor through mass-scale slavery, which necessitated the propagation of racist beliefs that dehumanized entire populations. But it has developed new ways of exploiting cheap and precarious labor, often in the Global South, shaped by implicit ideas that such populations don’t need—or are less deserving of—livable wages and economic stability.
MIT Technology Review’s new AI Colonialism series, which will be publishing throughout this week, digs into these and other parallels between AI development and the colonial past by examining communities that have been profoundly changed by the technology. In part one, we head to South Africa, where AI surveillance tools, built on the extraction of people’s behaviors and faces, are re-entrenching racial hierarchies and fueling a digital apartheid. Read More