Update, Nov. 20, 2020, at 12:03 p.m.: After creeping into the Top 100 on Amazon, this title was removed from the site on Friday morning. Other “University Press” books are still for sale, however.
Perhaps you’ve heard that there is an exciting new Barack Obama book that everyone’s talking about! I’m not talking about A Promised Land, the 751-page memoir and large physical object for which publisher Crown paid Obama tens of millions of dollars and which Obama spent four years writing (without a ghost, he brags).
No, I’m talking about Barack Obama Book, a 61-page tome by an author named “University Press.” Why is Barack Obama Book selling so well? Thanks to sponsored listings and canny search engine optimization, the book appears above Barack Obama’s actual memoir if you search Amazon for—you guessed it—“barack obama book.” Read More
Daily Archives: November 20, 2020
Zero-shot Learning for Relation Extraction
Most existing supervised and few-shot relation extraction methods rely on labeled training data. However, in real-world scenarios there are many relations for which no training data is available. We address this issue from the perspective of zero-shot learning (ZSL), which mirrors the way humans learn and recognize new concepts with no prior knowledge. We propose a zero-shot learning relation extraction (ZSLRE) framework that focuses on recognizing novel relations for which no labeled training data is available. Our proposed ZSLRE model recognizes new relations using prototypical networks that are modified to utilize side (auxiliary) information. This side information allows the modified prototypical networks to recognize novel relations in addition to previously known ones. We construct side information from labels and their synonyms, hypernyms of named entities, and keywords, and we build an automatic hypernym-extraction framework to obtain hypernyms of various named entities directly from the web. Extensive experiments on two public datasets (NYT and FewRel) demonstrate that our proposed model significantly outperforms state-of-the-art methods on supervised, few-shot, and zero-shot learning tasks. Our experimental results also demonstrate the effectiveness and robustness of our proposed model in a combined scenario. Once the paper is accepted for publication, we will release ZSLRE's source code and datasets to enable reproducibility and encourage further research. Read More
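The core idea the abstract describes, prototypes for seen relations computed from labeled examples, prototypes for novel relations taken from side-information embeddings, can be sketched in a few lines. This is a minimal toy illustration, not the paper's actual model: the embeddings are random vectors, and the side-information prototype for the novel relation (class 2) is an assumed stand-in for an embedding of its label text and hypernyms.

```python
import numpy as np

def prototypes(support_embeddings, labels):
    """Seen relations: prototype = mean of that class's support embeddings."""
    return {c: support_embeddings[labels == c].mean(axis=0)
            for c in np.unique(labels)}

def zero_shot_prototypes(side_info_embeddings):
    """Novel relations have no support set; the prototype comes from
    side information (label text, synonyms, hypernyms of named entities)."""
    return dict(side_info_embeddings)

def classify(query, protos):
    """Assign the query to the nearest prototype (Euclidean distance)."""
    return min(protos, key=lambda c: np.linalg.norm(query - protos[c]))

# Toy example: two seen relations (0, 1) plus one novel relation (2).
rng = np.random.default_rng(0)
seen = prototypes(rng.normal(size=(6, 4)), np.array([0, 0, 0, 1, 1, 1]))
novel = zero_shot_prototypes({2: np.ones(4)})  # side-info embedding only
protos = {**seen, **novel}
print(classify(np.ones(4) + 0.1, protos))
```

The query vector lies close to the side-information prototype, so it is recognized as the novel relation even though that relation contributed no labeled training examples.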
An Introduction to Federated Learning
Federated (decentralized) learning (FL) is an approach in which each device downloads the current model and computes an updated model locally using its own data, rather than sending that data to a central pool. These locally trained models are then sent from the devices back to a central server, where they are aggregated into a single consolidated, improved global model that is sent back to the devices. Federated learning makes it possible for AI algorithms to gain experience from a vast range of data located at different sites. Read More
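The download–train–aggregate loop described above can be sketched as federated averaging (FedAvg) on a toy linear-regression task. This is a minimal sketch, not a production FL system: the "devices" are just tuples of local arrays, each device's target vector `true_w` and the unweighted averaging step are simplifying assumptions.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One device: a few gradient steps of linear regression on local data."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_round(global_w, devices):
    """Server: send the global model out, then average the returned
    locally trained models (unweighted FedAvg for simplicity)."""
    local_models = [local_update(global_w, X, y) for X, y in devices]
    return np.mean(local_models, axis=0)

# Four simulated devices, each holding private local data.
rng = np.random.default_rng(1)
true_w = np.array([2.0, -1.0])
devices = []
for _ in range(4):
    X = rng.normal(size=(20, 2))
    devices.append((X, X @ true_w))

w = np.zeros(2)
for _ in range(20):  # communication rounds
    w = federated_round(w, devices)
print(w)  # approaches true_w; raw data never left the devices
```

Note that only model weights cross the network in `federated_round`; the local arrays `X` and `y` stay on their device, which is the privacy property the teaser describes.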
Citizens are turning face recognition on unidentified police
In part one of an audio series on face recognition, we explore the unexpected ways the technology is being used.
The new series of our AI podcast, In Machines We Trust, is all about face recognition. In part one of the series, Jennifer Strong and the team at MIT Technology Review explore the unexpected ways the technology is being used, including how it is being turned on police. Read More
The state of AI in 2020
The results of this year’s McKinsey Global Survey on artificial intelligence (AI) suggest that organizations are using AI as a tool for generating value. Increasingly, that value is coming in the form of revenues. A small contingent of respondents coming from a variety of industries attribute 20 percent or more of their organizations’ earnings before interest and taxes (EBIT) to AI. These companies plan to invest even more in AI in response to the COVID-19 pandemic and its acceleration of all things digital. This could create a wider divide between AI leaders and the majority of companies still struggling to capitalize on the technology; however, these leaders engage in a number of practices that could offer helpful hints for success. And while companies overall are making some progress in mitigating the risks of AI, most still have a long way to go. Read More
What is AI? We made this to help.
We made a podcast game to help you determine what is, or isn’t, AI
Defining what is, or isn’t, artificial intelligence can be tricky. So much so that even the experts get it wrong sometimes. That’s why MIT Technology Review’s Senior AI Reporter Karen Hao created a flowchart to explain it all. In this bonus episode, our host Jennifer Strong and her team reimagine Hao’s reporting, gamifying it into an audio postcard of sorts. Read More
Nvidia developed a radically different way to compress video calls
Nvidia Maxine uses Generative Adversarial Networks to re-create video frames.
Last month, Nvidia announced a new platform called Maxine that uses AI to enhance the performance and functionality of video conferencing software. The software uses a neural network to create a compact representation of a person’s face. This compact representation can then be sent across the network, where a second neural network reconstructs the original image—possibly with helpful modifications.
Nvidia says that its technique can reduce the bandwidth needs of video conferencing software by a factor of 10 compared to conventional compression techniques. It can also change how a person’s face is displayed. For example, if someone appears to be facing off-center due to the position of her camera, the software can rotate her face to look straight instead. Software can also replace someone’s real face with an animated avatar. Read More
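A back-of-envelope calculation shows why sending a compact face representation instead of compressed frames can cut bandwidth by an order of magnitude. The numbers below are illustrative assumptions, not Nvidia's published figures: a ~1 Mbps conventional stream at 30 fps, and a representation of 68 facial keypoints stored as 16-bit coordinates.

```python
# Illustrative arithmetic only; bitrate and keypoint count are assumptions.
fps = 30
conventional_bitrate = 1_000_000                 # ~1 Mbps compressed stream
bytes_per_frame_conventional = conventional_bitrate / 8 / fps

n_keypoints = 68                                 # assumed facial landmark count
bytes_per_frame_keypoints = n_keypoints * 2 * 2  # (x, y) as 16-bit ints

ratio = bytes_per_frame_conventional / bytes_per_frame_keypoints
print(f"{bytes_per_frame_conventional:.0f} B vs "
      f"{bytes_per_frame_keypoints} B per frame -> ~{ratio:.0f}x smaller")
```

Even under these rough assumptions the keypoint payload is more than ten times smaller per frame; the receiving neural network then does the heavy lifting of reconstructing a plausible image from those few numbers.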
NLP 101: What is Natural Language Processing?
How did NLP start?
Natural language processing (NLP) is, in my opinion, the most famous field of data science. Over the past decade it has gained a lot of traction and buzz in both industry and academia.
But the truth is, NLP is not a new field at all. The human desire for computers to comprehend and understand our language has existed since the creation of computers. Yes, those old computers that could barely run multiple programs at the same time, let alone comprehend the complexity of natural languages! Read More