5 Simple Full Stack Data Science Projects To Put On Your Resume

Organisations large and small are looking for aspiring data scientists who can not only extract meaningful insights from their data but also help them stay ahead of the curve.

It does not matter if you are a college drop-out or a fresher: with the right knowledge of tools and a good understanding of machine learning concepts, you can still pursue a fruitful data science career with a good pay scale. Read More

#training

Faster Neural Network Training with Data Echoing

In the twilight of Moore’s law, GPUs and other specialized hardware accelerators have dramatically sped up neural network training. However, earlier stages of the training pipeline, such as disk I/O and data preprocessing, do not run on accelerators. As accelerators continue to improve, these earlier stages will increasingly become the bottleneck. In this paper, we introduce “data echoing,” which reduces the total computation used by earlier pipeline stages and speeds up training whenever computation upstream from accelerators dominates the training time. Data echoing reuses (or “echoes”) intermediate outputs from earlier pipeline stages in order to reclaim idle capacity. We investigate the behavior of different data echoing algorithms on various workloads, for various amounts of echoing, and for various batch sizes. We find that in all settings, at least one data echoing algorithm can match the baseline’s predictive performance using less upstream computation. In some cases, data echoing can even compensate for a 4x slower input pipeline. Read More
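
As a rough illustration, the simplest variant, batch-level echoing, can be written as a small Python generator. This is a minimal sketch, not the paper's implementation; the `pipeline` iterable and `echo_factor` parameter are illustrative names, and the paper also studies echoing at other pipeline stages (e.g., before augmentation) and with reshuffling of echoed examples.

```python
def echo_batches(pipeline, echo_factor=2):
    """Yield every upstream batch echo_factor times.

    While the accelerator trains on the repeated copies, the slower
    upstream stages (disk I/O, preprocessing) have time to produce
    the next fresh batch, reclaiming idle accelerator capacity.
    """
    for batch in pipeline:
        for _ in range(echo_factor):
            yield batch

# Hypothetical usage: if preprocessing is ~2x slower than a training
# step, echoing each batch twice keeps the accelerator busy.
batches = echo_batches(iter([{"x": 1}, {"x": 2}]), echo_factor=2)
print(list(batches))  # [{'x': 1}, {'x': 1}, {'x': 2}, {'x': 2}]
```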

#neural-networks, #training

10 skills you'll need to survive the rise of automation

Automation is coming to the workplace.

Millions of jobs will be destroyed, but many new jobs will also be created in the process.

For those in the workforce – or those just joining it – the big question is: what skills are needed to navigate this monumental shift in the economy? How will humans create value in an increasingly automated world? Read More

#training

Webinar Wrap-up: How to Build a Career in AI and Machine Learning

Artificial Intelligence (AI) made headlines recently when people started reporting that Alexa was laughing unexpectedly. Those news reports led to the usual jokes about computers taking over the world, but there is nothing funny about AI as a career field. The fact that five out of six Americans use AI services in one form or another every day shows that this is a viable career option. Read More

#training

MIT's AI can train neural networks faster than ever before: 20x faster!

ProxylessNAS: Direct Neural Architecture Search on Target Task and Hardware

Neural architecture search (NAS) has a great impact by automatically designing effective neural network architectures. However, the prohibitive computational demand of conventional NAS algorithms (e.g. 10^4 GPU hours) makes it difficult to directly search the architectures on large-scale tasks (e.g. ImageNet). Differentiable NAS can reduce the cost of GPU hours via a continuous representation of network architecture but suffers from a high GPU memory consumption issue (memory grows linearly w.r.t. the candidate set size). As a result, these methods need to utilize proxy tasks, such as training on a smaller dataset, learning with only a few blocks, or training just for a few epochs. Architectures optimized on proxy tasks are not guaranteed to be optimal on the target task. In this paper, we present ProxylessNAS, which can directly learn the architectures for large-scale target tasks and target hardware platforms. We address the high memory consumption issue of differentiable NAS and reduce the computational cost (GPU hours and GPU memory) to the same level as regular training while still allowing a large candidate set. Experiments on CIFAR-10 and ImageNet demonstrate the effectiveness of directness and specialization. On CIFAR-10, our model achieves 2.08% test error with only 5.7M parameters, better than the previous state-of-the-art architecture AmoebaNet-B, while using 6× fewer parameters. On ImageNet, our model achieves 3.1% better top-1 accuracy than MobileNetV2, while being 1.2× faster with measured GPU latency. We also apply ProxylessNAS to specialize neural architectures for hardware with direct hardware metrics (e.g. latency) and provide insights for efficient CNN architecture design. Read More
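
The memory trick can be sketched in a few lines of PyTorch: instead of executing all candidate operations on an edge and mixing their outputs, as standard differentiable NAS does, sample a single path per step so activation memory stays at the level of one compact model. This is a simplified illustration under assumed names (`SampledMixedOp`, `alpha`), not the authors' code; in particular, the paper's BinaryConnect-style gradient estimator for updating the architecture parameters through the discrete sampling is omitted here.

```python
import torch
import torch.nn as nn

class SampledMixedOp(nn.Module):
    """One edge of the over-parameterized network: N candidate ops,
    but only a single sampled path is executed per forward pass, so
    GPU memory does not grow linearly with the candidate set size."""

    def __init__(self, candidates):
        super().__init__()
        self.candidates = nn.ModuleList(candidates)
        # One architecture logit per candidate path.
        self.alpha = nn.Parameter(torch.zeros(len(candidates)))

    def forward(self, x):
        probs = torch.softmax(self.alpha, dim=-1)
        idx = int(torch.multinomial(probs, 1))  # sample one path
        return self.candidates[idx](x)

# Illustrative candidate set for one layer (choices are assumptions):
ops = SampledMixedOp([
    nn.Conv2d(16, 16, 3, padding=1),
    nn.Conv2d(16, 16, 5, padding=2),
    nn.Identity(),
])
out = ops(torch.randn(1, 16, 32, 32))  # runs exactly one candidate
```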

#neural-networks, #training