An exploration of the class imbalance problem, the accuracy paradox, and some techniques for tackling the problem with the Imbalanced-Learn library.
One of the challenges that arises when developing machine learning models for classification is class imbalance. Most machine learning algorithms for classification were developed assuming balanced classes; in real life, however, properly balanced data is uncommon. Various alternatives have therefore been proposed to address this problem, along with tools for applying them. Such is the case with imbalanced-learn [1], a Python library that implements the most relevant algorithms for tackling the problem of class imbalance.
In this blog we are going to see what class imbalance is, why accuracy is a problematic metric for imbalanced classes, what random under-sampling and random over-sampling are, and how imbalanced-learn offers an appropriate way to address the class imbalance problem. Read More
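To make the accuracy paradox and the two resampling strategies concrete, here is a minimal sketch using scikit-learn and imbalanced-learn. The synthetic dataset, the roughly 9:1 imbalance ratio, and the variable names are illustrative assumptions, not taken from the post itself.

```python
# A minimal sketch of the accuracy paradox and random resampling with
# imbalanced-learn. The synthetic dataset and the ~9:1 class ratio are
# illustrative assumptions, not taken from the original post.
import numpy as np
from collections import Counter
from sklearn.datasets import make_classification
from imblearn.over_sampling import RandomOverSampler
from imblearn.under_sampling import RandomUnderSampler

# Build an imbalanced binary dataset: ~90% majority class, ~10% minority.
X, y = make_classification(n_samples=1000, n_classes=2,
                           weights=[0.9, 0.1], random_state=42)
print(Counter(y))  # e.g. Counter({0: 897, 1: 103})

# The accuracy paradox: a "model" that always predicts the majority
# class already scores ~90% accuracy while having learned nothing.
majority_pred = np.zeros_like(y)
print("majority-class accuracy:", (majority_pred == y).mean())

# Random over-sampling: duplicate minority samples until classes match.
X_over, y_over = RandomOverSampler(random_state=42).fit_resample(X, y)
print(Counter(y_over))

# Random under-sampling: discard majority samples until classes match.
X_under, y_under = RandomUnderSampler(random_state=42).fit_resample(X, y)
print(Counter(y_under))
```

Note the trade-off the sketch illustrates: over-sampling keeps all the original data but duplicates minority examples (risking overfitting), while under-sampling balances the classes by throwing information away.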
China Censors the Internet. So Why Doesn’t Russia?
The Kremlin has constructed an entire infrastructure of repression but has not displaced Western apps. Instead, it is turning to outright intimidation.
Margarita Simonyan, the editor in chief of the Kremlin-controlled RT television network, recently called on the government to block access to Western social media.
She wrote: “Foreign platforms in Russia must be shut down.”
Her choice of social network for sending that message: Twitter.
While the Kremlin fears an open internet shaped by American companies, it just can’t quit it. Read More
Is Google’s AI research about to implode?
What do Timnit Gebru’s firing and the recent papers coming out of Google tell us about the state of research at the world’s biggest AI research department?
The high point for Google’s research into Artificial Intelligence may well turn out to be the 19th of October 2017. This was the date that David Silver and his co-workers at DeepMind published a report, in the journal Nature, showing how their deep-learning algorithm AlphaGo Zero was a better Go player than not only the best human in the world, but all other Go-playing computers.
What was most remarkable about AlphaGo Zero was that it worked without human assistance. … But there was a problem. Maybe it wasn’t Silver and his colleagues’ problem, but it was a problem all the same. The DeepMind research program had shown what deep neural networks could do, but it had also revealed what they couldn’t do. Read More
The Thousand Brains Theory of Intelligence
In our most recent peer-reviewed paper, A Framework for Intelligence and Cortical Function Based on Grid Cells in the Neocortex, we put forward a novel theory for how the neocortex works. The Thousand Brains Theory of Intelligence proposes that rather than learning one model of an object (or concept), the brain builds many models of each object. Each model is built using different inputs, whether from slightly different parts of the sensor (such as different fingers on your hand) or from different sensors altogether (eyes vs. skin). The models vote together to reach a consensus on what they are sensing, and the consensus vote is what we perceive. It’s as if your brain is actually thousands of brains working simultaneously.
A key insight of our theory is based on an understanding of grid cells, neurons found in an older part of the brain that is responsible for navigation and knowing where you are in the world. Scientists have made great progress over the past few decades in understanding that the function of grid cells is to represent the location of a body in an environment. Recent experimental evidence suggests that grid cells are also present in the neocortex. We propose that grid cells exist throughout the neocortex, in every region and in every cortical column, and that they define a location-based framework for how the neocortex works. The same grid cell-based mechanism used in the older part of the brain to learn the structure of environments is used by the neocortex to learn the structure of objects: not only what they are, but also how they behave. Read More
@TomerUllman: I had an AI (GPT3) generate 10 “thought experiments” (based on classic ones as input), and asked @WhiteBoardG to sketch them.
EU report warns that AI makes autonomous vehicles ‘highly vulnerable’ to attack
The dream of autonomous vehicles is that they can avoid human error and save lives, but a new European Union Agency for Cybersecurity (ENISA) report has found that autonomous vehicles are “highly vulnerable to a wide range of attacks” that could be dangerous for passengers, pedestrians, and people in other vehicles. Attacks considered in the report include sensor attacks with beams of light, overwhelming object detection systems, back-end malicious activity, and adversarial machine learning attacks presented in training data or the physical world.
“The attack might be used to make the AI ‘blind’ for pedestrians by manipulating for instance the image recognition component in order to misclassify pedestrians. This could lead to havoc on the streets, as autonomous cars may hit pedestrians on the road or crosswalks,” the report reads. “The absence of sufficient security knowledge and expertise among developers and system designers on AI cybersecurity is a major barrier that hampers the integration of security in the automotive sector.” Read More
Deep Reinforcement Learning: Neural Networks for Learning Control Laws
Reinforcement Learning: Machine Learning Meets Control Theory
A.I. Generates 3D Virtual Concerts from Sound
The AI Research Paper Was Real. The ‘Coauthor’ Wasn’t
An IBM researcher found his name on two papers with which he had no connection. A different paper listed a fictitious author by the name of “Bill Franks.”
David Cox, the co-director of a prestigious artificial intelligence lab in Cambridge, Massachusetts, was scanning an online computer science bibliography in December when he noticed something odd—his name listed as an author alongside three researchers in China whom he didn’t know on two papers he didn’t recognize.
At first, he didn’t think much of it. The name Cox isn’t uncommon, so he figured there must be another David Cox doing AI research. “Then I opened up the PDF and saw my own picture looking back at me,” Cox says. “It was unbelievable.” Read More