Will China Overtake the U.S. in Artificial Intelligence Research?

China not only has the world’s largest population and looks set to become the largest economy — it also wants to lead the world when it comes to artificial intelligence (AI).

In 2017, the Communist Party of China set 2030 as the deadline for this ambitious AI goal, and, to get there, it laid out a bevy of milestones to reach by 2020. These include making significant contributions to fundamental research, being a favoured destination for the world’s brightest talents and having an AI industry that rivals global leaders in the field.

As this first deadline approaches, researchers note impressive leaps in the quality of China’s AI research. They also predict a shift in the nation’s ability to retain homegrown talent. That is partly because the government has implemented some successful retention programmes and partly because worsening diplomatic and trade relations mean that the United States — China’s main rival when it comes to most things, including AI — has become a less attractive destination. Read More

#china-vs-us

Model Cards for Model Reporting

Trained machine learning models are increasingly used to perform high-impact tasks in areas such as law enforcement, medicine, education, and employment. In order to clarify the intended use cases of machine learning models and minimize their usage in contexts for which they are not well suited, we recommend that released models be accompanied by documentation detailing their performance characteristics. In this paper, we propose a framework that we call model cards to encourage such transparent model reporting. Model cards are short documents accompanying trained machine learning models that provide benchmarked evaluation in a variety of conditions, such as across different cultural, demographic, or phenotypic groups (e.g., race, geographic location, sex, Fitzpatrick skin type [15]) and intersectional groups (e.g., age and race, or sex and Fitzpatrick skin type) that are relevant to the intended application domains. Model cards also disclose the context in which models are intended to be used, details of the performance evaluation procedures, and other relevant information. While we focus primarily on human-centered machine learning models in the application fields of computer vision and natural language processing, this framework can be used to document any trained machine learning model. To solidify the concept, we provide cards for two supervised models: one trained to detect smiling faces in images, and one trained to detect toxic comments in text. We propose model cards as a step towards the responsible democratization of machine learning and related artificial intelligence technology, increasing transparency into how well artificial intelligence technology works. We hope this work encourages those releasing trained machine learning models to accompany model releases with similar detailed evaluation numbers and other relevant documentation. Read More
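The paper describes model cards as structured documents rather than prescribing a file format. Purely as an illustration, the sketch below captures the sections named in the abstract (intended use, evaluation procedure, results disaggregated by group and intersection) as a small Python record that renders a plain-text card. All field names, the model name, and the metric values are hypothetical placeholders, not content from the paper.

```python
from dataclasses import dataclass, field
from typing import Dict, List

# Hypothetical, minimal encoding of a model card's main sections.
# Section names follow the abstract (intended use, evaluation procedure,
# disaggregated group metrics); the concrete values further below are
# illustrative placeholders only.

@dataclass
class ModelCard:
    model_name: str
    intended_use: str                     # context the model is meant for
    out_of_scope_uses: List[str]          # contexts it is not well suited to
    evaluation_procedure: str             # how the benchmark numbers were produced
    # Metrics reported per group (e.g. demographic or phenotypic groups)
    # and per intersection of groups, as the abstract recommends.
    group_metrics: Dict[str, Dict[str, float]] = field(default_factory=dict)

    def report(self) -> str:
        lines = [
            f"Model card: {self.model_name}",
            f"Intended use: {self.intended_use}",
            "Out-of-scope uses: " + "; ".join(self.out_of_scope_uses),
            f"Evaluation: {self.evaluation_procedure}",
            "Disaggregated results:",
        ]
        for group, metrics in self.group_metrics.items():
            stats = ", ".join(f"{k}={v:.2f}" for k, v in metrics.items())
            lines.append(f"  - {group}: {stats}")
        return "\n".join(lines)

# Hypothetical usage: a card for a smile-detection model, with invented numbers.
card = ModelCard(
    model_name="smile-detector-v1",
    intended_use="Detect smiling faces in consumer photos.",
    out_of_scope_uses=["emotion recognition", "surveillance"],
    evaluation_procedure="Precision/recall on a held-out test set, "
                         "disaggregated by reported sex and age group.",
    group_metrics={
        "sex: female": {"precision": 0.91, "recall": 0.88},
        "sex: male": {"precision": 0.90, "recall": 0.86},
        "age 18-30 x sex: female": {"precision": 0.93, "recall": 0.89},
    },
)
print(card.report())
```

Keeping the disaggregated metrics as structured data, rather than free text, is one way to make the benchmarked-evaluation part of a card easy to regenerate whenever the model is retrained; the prose sections (intended use, out-of-scope uses) remain human-written.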

#devops, #explainability, #governance

Datasheets for Datasets

The machine learning community currently has no standardized process for documenting datasets. To address this gap, we propose datasheets for datasets. In the electronics industry, every component, no matter how simple or complex, is accompanied by a datasheet that describes its operating characteristics, test results, recommended uses, and other information. By analogy, we propose that every dataset be accompanied by a datasheet that documents its motivation, composition, collection process, recommended uses, and so on. Datasheets for datasets will facilitate better communication between dataset creators and dataset consumers, and encourage the machine learning community to prioritize transparency and accountability. Read More
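As with model cards, the paper specifies the questions a datasheet should answer rather than a file format. The sketch below is one hypothetical way to hold the sections named in the abstract (motivation, composition, collection process, recommended uses) and to check that none is left empty before a dataset is released. The dataset name and the answers are invented placeholders.

```python
# Hypothetical sketch of a dataset datasheet, mirroring the sections named
# in the abstract. The example content is illustrative only.

REQUIRED_SECTIONS = [
    "motivation",
    "composition",
    "collection_process",
    "recommended_uses",
]

def missing_sections(datasheet: dict) -> list:
    """Return the required sections that are absent or left empty."""
    return [s for s in REQUIRED_SECTIONS if not datasheet.get(s)]

example_datasheet = {
    "dataset_name": "toy-comments-corpus",   # hypothetical dataset
    "motivation": "Collected to study toxicity classification in short text.",
    "composition": "50k English comments, each with a binary toxicity label.",
    "collection_process": "Scraped from public forums, then labelled by "
                          "three annotators per comment.",
    "recommended_uses": "Benchmarking toxicity classifiers; not recommended "
                        "for training production moderation systems.",
}

print("Missing sections:", missing_sections(example_datasheet) or "none")
```

A simple completeness check like this is the kind of lightweight tooling a team might add so that the datasheet travels with the dataset and stays filled in as the data evolves.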

#devops, #explainability, #governance