NLP 101: What is Natural Language Processing?

How did NLP start?

Natural language processing (NLP) is without a doubt — in my opinion — the most famous field of data science. Over the past decade, it has gained a lot of traction and buzz in both industry and academia.

But the truth is, NLP is not a new field at all. The human desire for computers to comprehend and understand our language has existed since the creation of computers. Yes, those old computers that could barely run multiple programs at the same time, let alone comprehend the complexity of natural languages! Read More

#nlp

When AI Sees a Man, It Thinks ‘Official.’ A Woman? ‘Smile’

A new paper renews concerns about bias in image recognition services offered by Google, Microsoft, and Amazon.

Men often judge women by their appearance. Turns out, computers do too.

When US and European researchers fed pictures of congressmembers to Google’s cloud image recognition service, the service applied three times as many annotations related to physical appearance to photos of women as it did to men.
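
For readers who want to poke at the same service themselves, here is a minimal sketch of the kind of call involved, using the google-cloud-vision Python client. The image filename is a placeholder, and the client assumes a configured Google Cloud project and credentials; this is not the researchers' own pipeline, just the basic label-annotation request.

```python
# Minimal sketch: ask Google's Cloud Vision API for label annotations
# on a single image and print what the service "sees".
from google.cloud import vision

client = vision.ImageAnnotatorClient()

# "portrait.jpg" is a hypothetical local file standing in for one of the
# congressmember photos used in the study.
with open("portrait.jpg", "rb") as f:
    image = vision.Image(content=f.read())

response = client.label_detection(image=image)
for label in response.label_annotations:
    print(label.description, round(label.score, 2))
```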

…“It results in women receiving a lower status stereotype: That women are there to look pretty and men are business leaders.” Read More

#bias

AI decision automation: Where it works, and where it doesn’t

Some companies are using AI for end-to-end decision-making, but not all decisions can be made without human intervention. Here are some real-world cases.

As artificial intelligence (AI) ascends in the marketplace, the burning question remains as to how far it can be trusted when it comes to the “last mile,” the final decision that follows the analytics and recommendations that AI yields.

… “Not all decisions in organizations can be fully automated, and some of these will require human intervention.” Read More

#augmented-intelligence

Making Sense of the AI Landscape

As more and more companies incorporate AI tools into their operations, business leaders need to find ways to adapt. But the term “AI” in fact covers a huge spectrum of different things. How can leaders start to make sense of this vast array of new systems? The authors analyzed over 800 different AI tools and found that the problems they solved fell into four distinct categories: rote tasks, simple tasks that require ethical decision-making, creative tasks with limited ethical implications, and tasks that require both creativity and ethics. Armed with this simple framework, leaders can start to get a handle on the human capabilities they’ll need to invest in to make the most of these new tools. Read More

#strategy

The way we train AI is fundamentally flawed

The process used to build most of the machine-learning models we use today can’t tell if they will work in the real world or not—and that’s a problem.

It’s no secret that machine-learning models tuned and tweaked to near-perfect performance in the lab often fail in real settings. This is typically put down to a mismatch between the data the AI was trained and tested on and the data it encounters in the world, a problem known as data shift. For example, an AI trained to spot signs of disease in high-quality medical images will struggle with blurry or cropped images captured by a cheap camera in a busy clinic.
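
To make the idea of data shift concrete, here is a toy sketch that is not tied to the article's medical-imaging example: a classifier fit on clean synthetic data loses accuracy when the test features are corrupted, with the added noise standing in for the blurry, cropped clinic photos.

```python
# Toy illustration of data shift: train on clean data, evaluate on both
# the in-distribution test set and a noise-corrupted ("shifted") copy.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# In-distribution test set: same generating process as training.
print("clean test accuracy:  ", model.score(X_test, y_test))

# Shifted test set: the same examples corrupted by strong feature noise.
rng = np.random.default_rng(0)
X_shifted = X_test + rng.normal(scale=3.0, size=X_test.shape)
print("shifted test accuracy:", model.score(X_shifted, y_test))
```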

Now a group of 40 researchers across seven different teams at Google have identified another major cause for the common failure of machine-learning models. Called “underspecification,” it could be an even bigger problem than data shift. Read More

#performance, #training

Machine learning cheat sheet

This cheat sheet contains many classical equations and diagrams on machine learning, which will help you quickly recall knowledge and ideas in the field.

The cheat sheet will also appeal to someone who is preparing for a job interview related to machine learning. Read More (PDF Version Here)

#machine-learning

GANs with Keras and TensorFlow

… Generative Adversarial Networks were first introduced by Goodfellow et al. in their 2014 paper, Generative Adversarial Networks. These networks can be used to generate synthetic (i.e., fake) images that are perceptually nearly identical to their ground-truth authentic originals.

In this tutorial you will learn how to implement Generative Adversarial Networks (GANs) using Keras and TensorFlow. Read More
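
As a taste of what the tutorial covers, here is a minimal sketch of the adversarial setup in Keras. The layer sizes and the 28x28 grayscale image shape are illustrative assumptions, not the architecture from the tutorial itself, and the "real" images below are random placeholders just to keep the sketch self-contained.

```python
# Minimal GAN sketch in Keras: a generator, a discriminator, and one
# adversarial training step on placeholder data.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

latent_dim = 100  # length of the random noise vector fed to the generator

# Generator: noise vector -> 28x28x1 image with values in [-1, 1].
generator = tf.keras.Sequential([
    layers.Dense(128, activation="relu", input_shape=(latent_dim,)),
    layers.Dense(28 * 28, activation="tanh"),
    layers.Reshape((28, 28, 1)),
])

# Discriminator: image -> probability that the image is real.
discriminator = tf.keras.Sequential([
    layers.Flatten(input_shape=(28, 28, 1)),
    layers.Dense(128, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])
discriminator.compile(optimizer="adam", loss="binary_crossentropy")

# Stacked model for the generator's update: the discriminator is frozen
# here, and the generator is trained to make it output "real" for fakes.
discriminator.trainable = False
gan = tf.keras.Sequential([generator, discriminator])
gan.compile(optimizer="adam", loss="binary_crossentropy")

# One adversarial step; random noise stands in for a real image batch.
batch = 16
real_images = np.random.uniform(-1, 1, (batch, 28, 28, 1)).astype("float32")
noise = np.random.normal(0, 1, (batch, latent_dim)).astype("float32")
fake_images = generator.predict(noise, verbose=0)

discriminator.train_on_batch(real_images, np.ones((batch, 1)))
discriminator.train_on_batch(fake_images, np.zeros((batch, 1)))
gan.train_on_batch(noise, np.ones((batch, 1)))
```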

#gans, #python

Steve Jobs’s last gambit: Apple’s M1 Chip

Even as Apple’s final event of 2020 gradually becomes a speck in the rearview mirror, I can’t help continually thinking about the new M1 chip that debuted there. I am, at heart, an optimist when it comes to technology and its impact on society. And my excitement about the new Apple Silicon is not tied to a single chip, a single computer, or a single company. It is really about the continuing — and even accelerating — shift to the next phase of computing.

… Today, we need our computers to be capable of handling many tasks — and doing so with haste. The emphasis is less on performance and more on capabilities. Everyone is heading toward this future, including Intel, AMD, Samsung, Qualcomm, and Huawei. But Apple’s move has been more deliberate, more encompassing, and more daring.

Steve Jobs’s last gambit was challenging the classic notion of the computer, and the M1 is Apple’s latest maneuver. Read More

#nvidia

Guide to Visual Recognition Datasets for Deep Learning with Python Code

Some visual recognition datasets have set benchmarks for supervised learning (Caltech101, Caltech256, CaltechBirds, CIFAR-10 and CIFAR-100) and for unsupervised or self-taught learning algorithms (STL10), using deep learning across different object categories for various research and development efforts. Visual recognition mainly covers image classification, image segmentation and localization, object detection, and various other use-case problems. Many of these datasets have APIs available in popular deep learning frameworks. This article discusses the features of some of these datasets, along with some Python code snippets showing how to use them. Read More
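
As an example of the kind of snippet the article refers to, here is a short sketch that loads CIFAR-10 through the Keras datasets API. This is just one of the framework APIs alluded to, and the rescaling step is a common convention rather than anything prescribed by the article.

```python
# Load CIFAR-10 via the built-in Keras datasets API (downloads on first use).
import tensorflow as tf

# 50,000 training and 10,000 test images of shape 32x32x3, labels 0-9.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()

# Scale pixel values to [0, 1] before feeding them to a model.
x_train, x_test = x_train / 255.0, x_test / 255.0
print(x_train.shape, y_train.shape)  # (50000, 32, 32, 3) (50000, 1)
```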

#image-recognition, #python

How Compute Divide Leads To Discrimination In AI Research

Science doesn’t discriminate, but technology probably does, at least in terms of accessibility. New research has found that the unequal distribution of compute power in academia is promoting inequality in the era of deep learning. The study, conducted jointly by AI researchers from Virginia Tech and Western University, found that this de-democratisation of AI has pushed people to leave academia and opt for high-paying industry jobs.

The study found that the amount of compute power at elite universities, ranked among the top 50 in the QS World University Rankings, is much greater than at mid-to-low-tier institutions. For the research, the authors analysed over 170,000 papers presented across 60 prestigious computer science conferences such as ACL, ICML, and NeurIPS, in categories like computer vision, data mining, NLP, and machine learning. Read More

Read the Paper

#artificial-intelligence, #nvidia