2020 state of enterprise machine learning

Algorithmia has talked with thousands of people in various stages of machine learning (ML) maturity and in various roles connected to ML. Following the report we published last year, we conducted a two-pronged survey this year, polling nearly 750 people across all industries, from companies actively building ML lifecycles to those just beginning their ML journeys. More than two-thirds of respondents said their AI budgets are growing, while only 2 percent are cutting back.

  • 40 percent of companies surveyed employed more than 10 data scientists, double the rate in 2018, when Algorithmia conducted its previous study; 3 percent employed more than 1,000 data scientists.
  • Many respondents said they’re in the early stages, such as evaluating use cases and developing models.
  • Many struggle with deployment: half of those surveyed took between eight days and three months to deploy a model, and 5 percent took a year or more. Larger companies generally took longer to deploy models, but the authors suggest that more mature machine learning teams were able to move faster.
  • Scaling models is the biggest impediment, cited by 43 percent of respondents.

Read More

#strategy

Characterizing Bias in Compressed Models

The popularity and widespread use of pruning and quantization is driven by the severe resource constraints of deploying deep neural networks to environments with strict latency, memory, and energy requirements. These techniques achieve high levels of compression with negligible impact on top-line metrics (top-1 and top-5 accuracy). However, overall accuracy hides disproportionately high errors on a small subset of examples; we call this subset Compression Identified Exemplars (CIE). We further establish that for CIE examples, compression amplifies existing algorithmic bias: pruning disproportionately impacts performance on underrepresented features, which often coincide with considerations of fairness. Given that CIE is a relatively small subset but a disproportionate contributor to model error, we propose its use as a human-in-the-loop auditing tool to surface a tractable subset of the dataset for further inspection or annotation by a domain expert. We provide qualitative and quantitative support that CIE surfaces the most challenging examples in the data distribution for human-in-the-loop auditing. Read More
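
The paper's exact auditing protocol is not reproduced here, but the core idea is easy to sketch: surface the examples on which a compressed model's predictions diverge from its uncompressed counterpart's. Below is a minimal, hypothetical PyTorch sketch; `dense_model`, `compressed_model`, and `data_loader` are assumed to already exist.

```python
import torch

def find_candidate_cies(dense_model, compressed_model, data_loader, device="cpu"):
    """Return dataset indices where the compressed model disagrees with the dense one."""
    dense_model.eval()
    compressed_model.eval()
    candidates = []
    with torch.no_grad():
        for batch_idx, (inputs, _labels) in enumerate(data_loader):
            inputs = inputs.to(device)
            dense_pred = dense_model(inputs).argmax(dim=1)
            comp_pred = compressed_model(inputs).argmax(dim=1)
            # Examples whose predicted label changes under compression are
            # candidates for human-in-the-loop auditing.
            disagree = (dense_pred != comp_pred).nonzero(as_tuple=True)[0]
            candidates.extend(batch_idx * data_loader.batch_size + i for i in disagree.tolist())
    return candidates
```

Ranking these candidates by how often they flip across several independently compressed models can give a more robust signal than a single model pair.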

#bias

Bird by Bird using Deep Learning

This article demonstrates how deep learning models used for image-related tasks can be extended to address the fine-grained classification problem. It is split into two parts: the first covers some basic concepts of computer vision and convolutional neural networks, while the second shows how to apply this knowledge to a real-world problem of bird species classification using PyTorch. Specifically, you will learn how to build your own CNN model based on ResNet-50 and how to further improve its performance with transfer learning, an auxiliary task, an attention-enhanced architecture, and a little more. Read More
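
As a taste of the transfer-learning step, here is a minimal, hypothetical PyTorch sketch: load an ImageNet-pretrained ResNet-50 from torchvision, freeze its backbone, and replace the classification head with one sized for bird species. The class count and freezing strategy are illustrative assumptions; the article's full recipe (auxiliary task, attention-enhanced architecture) goes further.

```python
import torch.nn as nn
from torchvision import models

NUM_BIRD_CLASSES = 200  # placeholder, e.g. the CUB-200-2011 dataset

# Load a ResNet-50 pre-trained on ImageNet.
model = models.resnet50(pretrained=True)

# Freeze the convolutional backbone so only the new head is trained at first.
for param in model.parameters():
    param.requires_grad = False

# Replace the final fully connected layer with a bird-species classifier.
model.fc = nn.Linear(model.fc.in_features, NUM_BIRD_CLASSES)
```

Fine-tuning typically proceeds by training the new head first, then optionally unfreezing deeper layers with a lower learning rate.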

#image-recognition, #python

Organizing your team for innovation

A short while ago, we were happy to receive the news that ML6 won the 2020 ‘AI Innovator of the Year’ award in the prestigious Data News Awards for Excellence. This award recognizes IT companies that set the trend in creating innovation and adopting artificial intelligence technologies. The jury specifically recognized our ability to innovate together with our clients, which is something we are very proud of.

To celebrate our award, we would like to share some insights into how we foster innovation at ML6. Everyone at ML6 works hard every day to make sure we stay at the forefront of innovation and create value with our clients, and while it’s hard to capture that spirit in words, we’ll try our best. To limit the scope of this post, we’ll focus mostly on our delivery team, but we’re happy to talk to anyone who would like to know more! Read More

#ai-first, #strategy

Finding the balance between edge AI vs. cloud AI

AI at the edge enables real-time machine learning through localized processing, allowing for immediate data processing, tighter security, and a heightened customer experience. At the same time, many enterprises are looking to push AI into the cloud, which can reduce barriers to implementation, improve knowledge sharing, and support larger models. The path forward lies in finding a balance that takes advantage of both cloud and edge strengths.

…In a perfect world, we’d centralize all workloads in the cloud for simplicity and scale. However, factors such as latency, bandwidth, autonomy, security, and privacy necessitate deploying more AI models at the edge, proximal to the data source. Read More

#cloud, #iot

Introduction to Linear Algebra for Applied Machine Learning with Python

Linear algebra is to machine learning as flour is to baking: every machine learning model is built on linear algebra, just as every cake is built on flour. It is not the only ingredient, of course. Machine learning models need vector calculus, probability, and optimization, as cakes need sugar, eggs, and butter. Applied machine learning, like baking, is essentially about combining these mathematical ingredients in clever ways to create useful (tasty?) models.

This document contains introductory level linear algebra notes for applied machine learning. It is meant as a reference rather than a comprehensive review. … The notes are based on a series of (mostly) freely available textbooks, video lectures, and classes I’ve read, watched and taken in the past. If you want to obtain a deeper understanding or to find exercises for each topic, you may want to consult those sources directly. Read More
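
To make the “flour” metaphor concrete, here is a small NumPy example (not taken from the notes themselves): fitting ordinary least-squares regression weights via the normal equations, one of the most common places linear algebra appears in applied ML. The data is synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                 # 100 samples, 3 features
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=100)   # noisy linear targets

# Normal equations: solve (X^T X) w = X^T y.
# np.linalg.solve is preferred over forming an explicit inverse.
w_hat = np.linalg.solve(X.T @ X, X.T @ y)
print(w_hat)  # close to [2.0, -1.0, 0.5]
```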

#python

Interesting AI papers published in 2020

A curated list of AI papers from Ajit Jaokar, drawn from his research and teaching. The papers are divided into Core (those likely to influence how AI algorithms develop in the future) and Interesting (selected according to his interests). Read More

#artificial-intelligence

7 popular activation functions you should know in Deep Learning and how to use them with Keras and TensorFlow 2

In artificial neural networks (ANNs), the activation function is a mathematical “gate” between the input feeding the current neuron and its output going to the next layer [1].

Activation functions are at the very core of Deep Learning. They determine a model’s output, its accuracy, and its computational efficiency. In some cases, activation functions have a major effect on a model’s ability to converge and on its convergence speed.

In this article, you’ll learn seven of the most popular activation functions in Deep Learning (Sigmoid, Tanh, ReLU, Leaky ReLU, PReLU, ELU, and SELU) and how to use them with Keras and TensorFlow 2. Read More
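
As a quick illustration of how these functions are wired up in practice, here is a minimal, hypothetical Keras (TensorFlow 2) model that mixes several of them; the layer sizes and input shape are placeholders, not taken from the article.

```python
import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    layers.Dense(64, activation="relu", input_shape=(20,)),  # ReLU via the activation argument
    layers.Dense(64, activation="selu"),                     # SELU
    layers.Dense(64),
    layers.LeakyReLU(alpha=0.1),                             # Leaky ReLU as a separate layer
    layers.Dense(1, activation="sigmoid"),                   # Sigmoid for binary output
])
model.summary()
```

Parametric variants such as PReLU are also available as layers (`layers.PReLU()`), since their slope is learned rather than fixed.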

#frameworks, #python

Five Strategies for Putting AI at the Center of Digital Transformation

Across industries, companies are applying artificial intelligence to their businesses, with mixed results. “What separates the AI projects that succeed from the ones that don’t often has to do with the business strategies organizations follow when applying AI,” writes Wharton professor of operations, information and decisions Kartik Hosanagar in this opinion piece. Hosanagar is faculty director of Wharton AI for Business, a new Analytics at Wharton initiative that will support students through research, curriculum, and experiential learning to investigate AI applications. He also designed and instructs Wharton Online’s Artificial Intelligence for Business course. Read More

#strategy

A Student Uses A.I. to Write a New Hamilton Song

Read More

#artificial-intelligence, #videos