China’s AI Talent Base Is Growing, and then Leaving

When China decides that it wants to establish leadership in a particular strategic area, its approach has tended to follow the mantra “the more, the better.” The pursuit of artificial intelligence (AI) talent is no exception.

AI talent is usually cultivated within universities and their extended research ecosystem. And in China, shifts in college majors are often perceived as an indicator of national priorities and resource allocation. So when Beijing deemed AI a “special discipline” as far back as 2012, dozens of universities rushed to set up their own AI specialization and degree programs, attracting thousands of students.

Credit where credit is due: China has been successful in producing AI talent, as evidenced by the rapid growth of its AI human capital over the last decade. But producing talent is only one part of the puzzle; equally important is retaining it so that it contributes to China’s AI aspirations over the long term. On the retention front, however, China has not done nearly as well. Read More

#china-ai

Brain Development

Read More

#human

Hierarchy of transcriptomic specialization across human cortex captured by myelin map topography

Hierarchy provides a unifying principle for the macroscale organization of anatomical and functional properties across primate cortex, yet microscale bases of specialization across human cortex are poorly understood. Anatomical hierarchy is conventionally informed by invasive tract-tracing measurements, creating a need for a principled proxy measure in humans. Moreover, cortex exhibits marked interareal variation in gene expression, yet organizing principles of cortical transcription remain unclear. We hypothesized that specialization of cortical microcircuitry involves hierarchical gradients of gene expression. We found that a noninvasive neuroimaging measure—MRI-derived T1-weighted/T2-weighted (T1w/T2w) mapping—reliably indexes anatomical hierarchy, and it captures the dominant pattern of transcriptional variation across human cortex. We found hierarchical gradients in expression profiles of genes related to microcircuit function, consistent with monkey microanatomy, and implicated in neuropsychiatric disorders. Our findings identify a hierarchical axis linking cortical transcription and anatomy, along which gradients of microscale properties may contribute to the macroscale specialization of cortical function. Read More
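The “myelin map” the abstract relies on is, computationally, just a voxelwise ratio of two co-registered MRI contrasts. A minimal sketch of how such a map might be computed, assuming the two images are already aligned numpy arrays (the array names, toy values, and epsilon guard are illustrative, not from the paper):

```python
import numpy as np

def myelin_map(t1w, t2w, eps=1e-6):
    """Voxelwise T1w/T2w ratio; higher values roughly track myelin content."""
    t1w = np.asarray(t1w, dtype=float)
    t2w = np.asarray(t2w, dtype=float)
    return t1w / (t2w + eps)  # eps guards against division by zero

# Toy 2x2 "images" standing in for registered volumes
t1 = np.array([[2.0, 4.0], [1.0, 3.0]])
t2 = np.array([[1.0, 2.0], [2.0, 3.0]])
print(myelin_map(t1, t2).round(2))
```

The paper’s contribution is in showing that the spatial topography of this simple ratio tracks anatomical hierarchy and transcriptional gradients, not in the ratio itself.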

#human

Efficient Video Generation on Complex Datasets

Generative models of natural images have progressed towards high fidelity samples by leveraging scale. We attempt to carry this success to the field of video modeling by showing that large Generative Adversarial Networks trained on the complex Kinetics-600 dataset are able to produce video samples of substantially higher complexity than previous work. Our proposed model, Dual Video Discriminator GAN (DVD-GAN), scales to longer and higher resolution videos by leveraging a computationally efficient decomposition of its discriminator. We evaluate on the related tasks of video synthesis and video prediction, and achieve new state of the art Fréchet Inception Distance on prediction for Kinetics-600, as well as state of the art Inception Score for synthesis on the UCF-101 dataset, alongside establishing a strong baseline for synthesis on Kinetics-600. Read More
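The “computationally efficient decomposition” splits the discriminator in two: a spatial discriminator that judges a handful of randomly sampled full-resolution frames, and a temporal discriminator that judges the whole clip at reduced spatial resolution, so neither network ever sees the full video at full resolution. A rough sketch of the two input streams (the frame count, downsampling factor, and shapes below are illustrative, not the paper’s exact values):

```python
import numpy as np

def spatial_inputs(video, k=8, rng=None):
    """Sample k random full-resolution frames for the spatial discriminator.
    video: (T, H, W, C) array; judges per-frame realism."""
    rng = rng or np.random.default_rng(0)
    idx = rng.choice(video.shape[0], size=k, replace=False)
    return video[idx]                        # (k, H, W, C)

def temporal_input(video, factor=2):
    """Spatially downsample the whole clip for the temporal discriminator,
    which judges motion realism across time."""
    return video[:, ::factor, ::factor, :]   # (T, H/factor, W/factor, C)

video = np.zeros((48, 64, 64, 3))            # toy Kinetics-style clip
print(spatial_inputs(video).shape)           # (8, 64, 64, 3)
print(temporal_input(video).shape)           # (48, 32, 32, 3)
```

Either stream alone is far cheaper than a discriminator over the full (48, 64, 64, 3) tensor, which is what makes the approach scale to longer, higher-resolution videos.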

#gans, #image-recognition

How teaching AI to be curious helps machines learn for themselves

When playing a video game, what motivates you to carry on?

This question is perhaps too broad to yield a single answer, but if you had to sum up why you accept that next quest, jump into a new level, or cave and play just one more turn, the simplest explanation might be “curiosity” — just to see what happens next. And as it turns out, curiosity is a very effective motivator when teaching AI to play video games, too.

Research published this week by artificial intelligence lab OpenAI explains how an AI agent with a sense of curiosity outperformed its predecessors playing the classic 1984 Atari game Montezuma’s Revenge. Read More
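“Curiosity” in this line of work is typically formalized as an intrinsic reward equal to the agent’s own prediction error about the next state: surprising transitions pay out, and familiar ones stop doing so as the agent’s world model improves. A toy numpy sketch of that loop (the linear forward model and all constants here are illustrative; the actual agents learn deep feature encodings):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(4, 4))    # toy forward model: predicts next state

def curiosity_reward(state, action_effect, lr=0.1):
    """Intrinsic reward = error of the agent's own next-state prediction.
    The model is updated online, so familiar transitions become 'boring'."""
    global W
    pred = W @ state                       # what the agent expects to happen
    next_state = state + action_effect     # what actually happens
    err = next_state - pred
    W += lr * np.outer(err, state)         # one SGD step on the forward model
    return float(np.sum(err ** 2))         # big error => surprising => rewarded

s = np.ones(4)
effect = np.full(4, 0.5)
rewards = [curiosity_reward(s, effect) for _ in range(50)]
print(rewards[0] > rewards[-1])            # the same transition grows less rewarding
```

This is why curiosity helps in sparse-reward games like Montezuma’s Revenge: even when the environment pays out nothing, the agent still has a dense internal signal pushing it toward states it cannot yet predict.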

#reinforcement-learning

Please Stop Explaining Black Box Models for High-Stakes Decisions

Black box machine learning models are currently being used for high stakes decision-making throughout society, causing problems throughout healthcare, criminal justice, and in other domains. People have hoped that creating methods for explaining these black box models will alleviate some of these problems, but trying to explain black box models, rather than creating models that are interpretable in the first place, is likely to perpetuate bad practices and can potentially cause catastrophic harm to society. There is a way forward – it is to design models that are inherently interpretable. Read More
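The alternative being argued for is a model whose reasoning is legible by construction, such as a small points-based scoring system. A toy sketch of that style of model (the features and point values below are invented for illustration, not drawn from any real deployed system):

```python
# A points-based scoring model: every factor's contribution is visible,
# so the "explanation" is the model itself -- no post-hoc explainer needed.
RULES = [
    ("prior_offenses >= 3", lambda x: x["prior_offenses"] >= 3, 2),
    ("age < 25",            lambda x: x["age"] < 25,            1),
    ("employed",            lambda x: x["employed"],           -1),
]

def score(x):
    total = 0
    for name, cond, points in RULES:
        if cond(x):
            total += points                 # each point traceable to a named rule
    return total

applicant = {"prior_offenses": 4, "age": 23, "employed": True}
print(score(applicant))                     # 2 + 1 - 1 = 2
```

A stakeholder can audit every rule directly, which is the paper’s point: for high-stakes decisions, interpretability should be a property of the model, not an approximation bolted on afterward.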

#explainability

A Guide to Not Killing or Mutilating Artificial Intelligence Research

What’s the fastest way to build a jig-saw puzzle? That was the question posed by Michael Polanyi in 1962. An obvious answer is to enlist help. In what way, then, could the helpers be coordinated most efficiently? If you divided pieces between the helpers, then progress would slow to a crawl. You couldn’t know how to usefully divide the pieces without first solving the puzzle.

Polanyi found it obvious that the fastest way to build a jig-saw puzzle is to let everyone work on it together in full sight of each other. No central authority could accelerate progress. “Under this system,” Polanyi wrote, “each helper will act on his own initiative, by responding to the latest achievements of the others, and the completion of their joint task will be greatly accelerated.” Read More

#artificial-intelligence

The algorithms that detect hate speech online are biased against black people

Platforms like Facebook, YouTube, and Twitter are banking on developing artificial intelligence technology to help stop the spread of hateful speech on their networks. The idea is that complex algorithms that use natural language processing will flag racist or violent speech faster and better than human beings possibly can. Doing this effectively is more urgent than ever in light of recent mass shootings and violence linked to hate speech online.

But two new studies show that AI trained to identify hate speech may actually end up amplifying racial bias. In one study, researchers found that leading AI models for processing hate speech were one-and-a-half times more likely to flag tweets as offensive or hateful when they were written by African Americans, and 2.2 times more likely to flag tweets written in African American English (which is commonly spoken by black people in the US). Another study found similar widespread evidence of racial bias against black speech in five widely used academic data sets for studying hate speech that totaled around 155,800 Twitter posts. Read More
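The headline figures are rate ratios: how often one group’s posts are flagged relative to another’s. A toy computation of that disparity metric (the flag decisions below are invented for illustration, not the studies’ data):

```python
def flag_rate(flags):
    """Fraction of posts a classifier marked as offensive (1 = flagged)."""
    return sum(flags) / len(flags)

# Hypothetical per-group flag decisions from some classifier
group_a = [1, 1, 0, 1, 0, 0, 1, 0, 0, 0]   # 40% flagged
group_b = [1, 0, 0, 0, 0, 1, 0, 0, 0, 0]   # 20% flagged

disparity = flag_rate(group_a) / flag_rate(group_b)
print(disparity)                            # 2.0: group A flagged twice as often
```

A ratio of 1.0 would mean the classifier flags both groups at the same rate; the studies’ reported ratios of 1.5 and 2.2 are measurements of exactly this kind of gap.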

#bias, #nlp

DeepMind’s Losses and the Future of Artificial Intelligence

Alphabet’s DeepMind lost $572 million last year. What does it mean?

DeepMind, likely the world’s largest research-focused artificial intelligence operation, is losing a lot of money fast, more than $1 billion in the past three years. DeepMind also has more than $1 billion in debt due in the next 12 months.

Does this mean that AI is falling apart? Read More

#artificial-intelligence, #reinforcement-learning

An elegant way to represent forward propagation and back propagation in a neural network

Read More
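As a reminder of what the two passes actually compute, here is a minimal one-hidden-layer network in numpy. This is the standard textbook formulation (cache intermediates going forward, apply the chain rule layer by layer going backward), not necessarily the particular representation the linked post proposes:

```python
import numpy as np

rng = np.random.default_rng(1)
W1, W2 = rng.normal(size=(3, 2)), rng.normal(size=(1, 3))

def forward(x):
    """Forward pass: cache intermediates for reuse in the backward pass."""
    z1 = W1 @ x
    h = np.tanh(z1)
    y = W2 @ h
    return y, (x, z1, h)

def backward(dy, cache):
    """Backward pass: chain rule applied layer by layer, right to left."""
    x, z1, h = cache
    dW2 = np.outer(dy, h)
    dh = W2.T @ dy
    dz1 = dh * (1 - np.tanh(z1) ** 2)   # tanh'(z) = 1 - tanh(z)^2
    dW1 = np.outer(dz1, x)
    return dW1, dW2

x = np.array([1.0, -1.0])
y, cache = forward(x)
dW1, dW2 = backward(np.ones(1), cache)
print(dW1.shape, dW2.shape)             # (3, 2) (1, 3): gradients match weights
```

The symmetry is the “elegant” part: each forward operation (matmul, activation) has a mirror-image backward operation consuming the cached values in reverse order.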

#neural-networks