These are exciting times for the artificial intelligence community. Interest in the field is growing at an accelerating pace, enrollment in academic and professional machine learning courses is soaring, attendance at AI conferences is at an all-time high, and AI algorithms have become a vital component of many applications we use every day.
But as with any field going through the hype cycle, AI is awash in information, much of which is misleading or of little value. … In this post, I will try to provide a few guidelines for writing good AI pitches, based on my experience covering the field for several years. This is mainly a guide for the PR people who write AI pitches, but it should also serve reporters, who can use it to tell a good AI pitch from one that contains too much hype and too little value.
The Components of a Neural Network
This article is a continuation of a series on key theoretical concepts in Machine Learning.
Neural Networks are the poster boy of Deep Learning, a branch of Machine Learning characterised by its use of a large number of interwoven computations. The individual computations themselves are relatively straightforward, but it is the complexity of the connections between them that gives neural networks their advanced analytic ability.
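To make that point concrete, here is a minimal sketch of the idea in NumPy: each component is just a weighted sum followed by a simple non-linearity, and the "network" is nothing more than these components chained together. The layer sizes and function names are illustrative choices, not anything from the article itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    """Elementwise non-linearity applied after each weighted sum."""
    return np.maximum(0.0, x)

def dense(x, weights, bias):
    """A single component: multiply inputs by weights and add a bias."""
    return x @ weights + bias

# Hypothetical sizes for the example: 4 inputs, 8 hidden units, 1 output.
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def forward(x):
    """Forward pass: two simple computations connected in sequence."""
    hidden = relu(dense(x, W1, b1))
    return dense(hidden, W2, b2)

x = rng.normal(size=(3, 4))   # a batch of three example inputs
print(forward(x).shape)       # -> (3, 1)
```

Each call on its own is elementary linear algebra; the analytic power described above comes from stacking many such layers and learning their weights.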
2020 state of enterprise machine learning
Algorithmia has talked with thousands of people in various stages of machine learning (ML) maturity and in various roles connected to ML. Following the report we published last year, we conducted a two-pronged survey this year, polling nearly 750 people across all industries, from companies actively building out ML lifecycles to those just beginning their ML journeys. More than two-thirds of respondents said their AI budgets are growing, while only 2 percent are cutting back.
- 40 percent of companies surveyed employed more than 10 data scientists, double the rate in 2018, when Algorithmia conducted its previous study. Three percent employed more than 1,000 data scientists.
- Many respondents said they’re in the early stages, such as evaluating use cases and developing models.
- Many struggle with deployment. Half of those surveyed took between eight days and three months to deploy a model, and 5 percent took a year or more. Larger companies generally took longer to deploy models, but the authors suggest that more mature machine learning teams were able to move faster.
- Scaling models is the biggest impediment, cited by 43 percent of respondents.
Characterizing Bias in Compressed Models
The popularity and widespread use of pruning and quantization are driven by the severe resource constraints of deploying deep neural networks to environments with strict latency, memory and energy requirements. These techniques achieve high levels of compression with negligible impact on top-line metrics (top-1 and top-5 accuracy). However, overall accuracy hides disproportionately high errors on a small subset of examples; we call this subset Compression Identified Exemplars (CIE). We further establish that on CIE examples, compression amplifies existing algorithmic bias: pruning disproportionately degrades performance on underrepresented features, which often coincide with considerations of fairness. Given that CIE is a relatively small subset yet a large contributor to the model's error, we propose using it as a human-in-the-loop auditing tool to surface a tractable subset of the dataset for further inspection or annotation by a domain expert. We provide qualitative and quantitative support that CIE surfaces the most challenging examples in the data distribution for human-in-the-loop auditing.
#bias
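As a rough illustration of the auditing idea, the sketch below flags examples where a pruned model's prediction diverges from the uncompressed baseline's. This is a simplification: the paper defines Compression Identified Exemplars over populations of compressed and non-compressed models, not a single pair, and the function and array names here are hypothetical.

```python
import numpy as np

def find_cie_candidates(baseline_logits, compressed_logits):
    """
    Flag examples where the compressed model's predicted class differs
    from the uncompressed baseline's. A simplified stand-in for the
    paper's Compression Identified Exemplars (CIE).
    """
    baseline_pred = np.argmax(baseline_logits, axis=1)
    compressed_pred = np.argmax(compressed_logits, axis=1)
    disagree = baseline_pred != compressed_pred
    return np.nonzero(disagree)[0]

# Hypothetical usage: logits from both models over the same evaluation set.
# Placeholder arrays stand in for real model outputs.
rng = np.random.default_rng(0)
baseline_logits = rng.normal(size=(1000, 10))
compressed_logits = baseline_logits + rng.normal(scale=0.5, size=(1000, 10))

cie_indices = find_cie_candidates(baseline_logits, compressed_logits)
print(f"{len(cie_indices)} of 1000 examples flagged for human review")
```

The flagged indices form the tractable subset the abstract describes, which a domain expert could then inspect or re-annotate.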