LinkedIn’s job-matching AI was biased. The company’s solution? More AI.

ZipRecruiter, CareerBuilder, LinkedIn—most of the world’s biggest job search sites use AI to match people with job openings. But the algorithms don’t always play fair.

Years ago, LinkedIn discovered that the recommendation algorithms it uses to match job candidates with opportunities were producing biased results. The algorithms were ranking candidates partly on the basis of how likely they were to apply for a position or respond to a recruiter. The system wound up referring more men than women for open roles simply because men are often more aggressive at seeking out new opportunities.

To counteract the bias, LinkedIn built another AI program that adjusts the results of the first. Meanwhile, some of the world’s largest job search sites—including CareerBuilder, ZipRecruiter, and Monster—are taking very different approaches to addressing bias on their own platforms, as we report in the newest episode of MIT Technology Review’s podcast “In Machines We Trust.” Read More
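
The article doesn’t spell out how the corrective program works. One common way to counteract this kind of bias, though, is a post-processing re-ranker that reorders each page of recommendations so that no group’s share drifts far from a target proportion, leaving the underlying relevance model untouched. The sketch below is illustrative only; the candidate IDs, scores, group labels, and target shares are all made up.

```python
# Illustrative sketch only, not LinkedIn's production system.
# Greedy re-ranking: at each slot, prefer the highest-scored remaining
# candidate from whichever group is furthest below its target share.

from collections import defaultdict

def rerank(candidates, target_shares):
    """candidates: list of (candidate_id, score, group), sorted by score descending.
    target_shares: dict mapping group -> desired fraction of each results page."""
    remaining = list(candidates)
    reranked, counts = [], defaultdict(int)

    while remaining:
        k = len(reranked) + 1  # size of the list after filling this slot

        def deficit(group):
            # How far below its target share the group would sit at size k
            # if we skipped it for this slot.
            return target_shares.get(group, 0.0) - counts[group] / k

        # Prefer the most under-represented group; break ties by model score.
        best = max(remaining, key=lambda c: (deficit(c[2]), c[1]))
        remaining.remove(best)
        reranked.append(best)
        counts[best[2]] += 1

    return reranked

# Hypothetical scored pool: (id, relevance score, group label).
pool = [("a", 0.97, "M"), ("b", 0.95, "M"), ("c", 0.94, "F"),
        ("d", 0.90, "M"), ("e", 0.88, "F")]
print(rerank(pool, {"M": 0.5, "F": 0.5}))  # alternates roughly M, F, M, F, M
```

Keeping the fairness constraint in a separate layer like this makes it auditable and tunable without retraining the relevance model, whether or not that is exactly how LinkedIn’s second system works.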

#bias, #podcasts

Why scientists think this hack is crucial for lifelong learning

In season two of the show 30 Rock, Tina Fey’s character Liz Lemon says to her boss, “I have to do that thing rich people do, where they turn money into more money.” While our brains can’t passively invest in stocks for us and watch the money grow, they can do almost exactly that when working on a new skill: turn learning into more learning.

All you have to do is sit back and relax.

This is exemplified by a study published June 8 in Cell Reports. Scientists examined the fluctuating magnetic fields in the brains of participants who were asked to perform a sequential task over and over. They observed that during the brief rest breaks between practice rounds, the participants’ brains rapidly replayed the task, as if the learning were continuing on its own. Read More

#human

The Memo

In 2002, Amazon’s Jeff Bezos issued a memo that has entered tech industry canon. The memo, known as the “API Mandate”, is generally perceived as a statement about technology at Amazon, and is therefore widely admired by technologists and wholly ignored by executives. This is unfortunate, because it’s no exaggeration to say that the API Mandate completely transformed Amazon as a business and laid the foundation for its success. Better still, unlike many things that global technology titans do, it is something that can be replicated and put to use by almost any business.

In this post, we’ll talk about the memo, and how it created the systems and incentives for radical organisational transformation. Read More
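
For context, paraphrasing the widely circulated account of the memo (the excerpt above doesn’t quote it): its best-known rules are that every team must expose its data and functionality through service interfaces, that teams may only communicate with each other through those interfaces, and that reading another team’s data store directly is forbidden. A minimal sketch of that distinction, with entirely hypothetical names:

```python
# Illustrative sketch of the principle, not Amazon's code. The "orders" team
# owns its data store; the only way other teams can reach it is through the
# service interface the team publishes.

class OrdersService:
    """The interface the orders team exposes to every other team (and, in
    principle, to external customers). Names here are hypothetical."""

    def __init__(self):
        # Private data store owned by the orders team.
        self._orders = {"o-1001": {"total": 42.50, "status": "shipped"}}

    def get_order_total(self, order_id: str) -> float:
        return self._orders[order_id]["total"]

# What the mandate forbids: another team reaching into the orders data store
# directly, e.g. orders_service._orders["o-1001"]["total"] or a raw SQL query
# against the orders team's database.

# What the mandate requires: go through the published interface. The contract
# is the interface, whether it is an in-process call today or a network call
# tomorrow.
billing_total = OrdersService().get_order_total("o-1001")
print(billing_total)
```

The often-cited final rule, that every interface must be designed as if it could one day be exposed to outside developers, is the part usually credited with laying the groundwork for AWS.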

#strategy

NIST Proposes Method for Evaluating User Trust in Artificial Intelligence Systems

Can trust, one of the primary bases of relationships throughout history, be quantified and measured?

Illustration: People evaluating two different tasks performed by an AI (music selection and medical diagnosis) may trust the AI to different degrees, because each task carries a different level of risk.

Every time you speak to a virtual assistant on your smartphone, you are talking to an artificial intelligence — an AI that can, for example, learn your taste in music and make song recommendations that improve based on your interactions. However, AI also assists us with more risk-fraught activities, such as helping doctors diagnose cancer. These are two very different scenarios, but the same issue permeates both: How do we humans decide whether or not to trust a machine’s recommendations?

This is the question that a new draft publication from the National Institute of Standards and Technology (NIST) poses, with the goal of stimulating a discussion about how humans trust AI systems. The document, Artificial Intelligence and User Trust (NISTIR 8332), is open for public comment until July 30, 2021. Read More

#nist, #trust

Reward is enough

In this article we hypothesise that intelligence, and its associated abilities, can be understood as subserving the maximisation of reward. Accordingly, reward is enough to drive behaviour that exhibits abilities studied in natural and artificial intelligence, including knowledge, learning, perception, social intelligence, language, generalisation and imitation. This is in contrast to the view that specialised problem formulations are needed for each ability, based on other signals or objectives. Furthermore, we suggest that agents that learn through trial and error experience to maximise reward could learn behaviour that exhibits most if not all of these abilities, and therefore that powerful reinforcement learning agents could constitute a solution to artificial general intelligence. Read More
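
The paper argues this as a hypothesis rather than an implementation, but the trial-and-error learner it refers to is the standard reinforcement learning loop: act, observe a reward, update a value estimate, repeat. Below is a minimal sketch of that loop, using a toy tabular Q-learning agent on a two-state environment invented for this example; it is not code from the paper.

```python
# Toy illustration of reward-driven trial-and-error learning (tabular
# Q-learning on a tiny made-up environment); not code from the paper.
import random

N_STATES, N_ACTIONS = 2, 2
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

def step(state, action):
    """Hypothetical environment: action 1 in state 1 pays off, everything else doesn't."""
    reward = 1.0 if (state == 1 and action == 1) else 0.0
    next_state = action  # the action chosen determines the next state
    return next_state, reward

Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]
state = 0
for _ in range(10_000):
    # Trial and error: mostly exploit the current value estimates, sometimes explore.
    if random.random() < EPSILON:
        action = random.randrange(N_ACTIONS)
    else:
        action = max(range(N_ACTIONS), key=lambda a: Q[state][a])
    next_state, reward = step(state, action)
    # Nudge the estimate toward reward plus the discounted value of what follows.
    Q[state][action] += ALPHA * (reward + GAMMA * max(Q[next_state]) - Q[state][action])
    state = next_state

print(Q)  # the agent learns to prefer action 1, which routes it toward the reward
```

Everything the agent ends up “knowing” about this tiny world is induced by the scalar reward alone, which is the paper’s claim in miniature.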

#gans, #reinforcement-learning