7 popular activation functions you should know in Deep Learning and how to use them with Keras and TensorFlow 2

In artificial neural networks (ANNs), the activation function is a mathematical “gate” in between the input feeding the current neuron and its output going to the next layer [1].

Activation functions are at the very core of Deep Learning. They determine a model's output, its accuracy, and its computational efficiency. In some cases, activation functions also have a major effect on the model's ability to converge and on its convergence speed.

In this article, you’ll learn seven of the most popular activation functions in Deep Learning — Sigmoid, Tanh, ReLU, Leaky ReLU, PReLU, ELU, and SELU — and how to use them with Keras and TensorFlow 2. Read More
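As a quick taste of the Keras side, here is a minimal sketch showing how each of the seven activations can be attached to a layer in TensorFlow 2 (the layer widths and input shape are arbitrary placeholders):

```python
import tensorflow as tf
from tensorflow.keras import layers

# Most activations are passed to a layer by name; Leaky ReLU and PReLU
# are standalone layers because they carry (or learn) a negative slope.
model = tf.keras.Sequential([
    layers.Dense(64, activation="sigmoid", input_shape=(32,)),  # Sigmoid
    layers.Dense(64, activation="tanh"),   # Tanh
    layers.Dense(64, activation="relu"),   # ReLU
    layers.Dense(64),
    layers.LeakyReLU(alpha=0.1),           # Leaky ReLU, fixed negative slope
    layers.Dense(64),
    layers.PReLU(),                        # PReLU learns the negative slope
    layers.Dense(64, activation="elu"),    # ELU
    layers.Dense(64, activation="selu"),   # SELU (pairs with lecun_normal init)
])
model.summary()
```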

#frameworks, #python

The Top 5 Data Science Libraries

A closer look at the most useful and unique Python libraries, packages, modules, and platforms for Data Scientists, including:

  • Pandas Profiling
  • NLTK
  • TextBlob
  • pyLDAvis
  • NetworkX

Read More
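For a flavor of the first entry on the list, here is a minimal Pandas Profiling sketch — `data.csv` is a placeholder path, not a file from the article:

```python
import pandas as pd
from pandas_profiling import ProfileReport

# Generate a one-page exploratory report (distributions, correlations,
# missing values) for an arbitrary dataset.
df = pd.read_csv("data.csv")  # placeholder path
profile = ProfileReport(df, title="Exploratory Report")
profile.to_file("report.html")
```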

#python, #frameworks

Neural ODEs with PyTorch Lightning and TorchDyn

Effortless, Scalable Training of Neural Differential Equations

Traditional neural network models are composed of a finite number of layers. Neural Differential Equations (NDEs), a core model class of the so-called continuous-depth learning framework, challenge this notion by defining forward inference passes as the solution of an initial value problem. This effectively means that NDEs can be thought of as being composed of a continuum of layers, where the vector field itself is parametrized by an arbitrary neural network. Since the seminal work that initially popularized the idea, the framework has grown considerably, seeing applications in control, generative modeling, and forecasting. Read More
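To make the "continuum of layers" idea concrete, here is a self-contained sketch in plain PyTorch: the vector field is an ordinary network, and the forward pass integrates it over a depth interval with a fixed-step Euler loop. (TorchDyn replaces this loop with adaptive ODE solvers and adjoint-based training; this is only the underlying idea, not TorchDyn's API.)

```python
import torch
import torch.nn as nn

class ToyNeuralODE(nn.Module):
    """Forward pass = solve dx/ds = f(x) from s=0 to s=1 (Euler, for clarity)."""
    def __init__(self, vector_field: nn.Module, steps: int = 20):
        super().__init__()
        self.f = vector_field      # any network mapping state -> state derivative
        self.steps = steps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = 1.0 / self.steps       # step size over the depth interval [0, 1]
        for _ in range(self.steps):
            x = x + h * self.f(x)  # one Euler step: a "layer" at depth s
        return x

vector_field = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 2))
model = ToyNeuralODE(vector_field)
out = model(torch.randn(8, 2))     # gradients flow through the whole trajectory
```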

#frameworks, #neural-networks

Machine Learning Reference Architectures from Google, Facebook, Uber, DataBricks and Others

Despite the hype surrounding machine learning and artificial intelligence (AI), most efforts in the enterprise remain at a pilot stage. Part of the reason is the experimental nature of machine learning projects, but a significant component is the lack of maturity of machine learning architectures. This problem is particularly visible in enterprise environments, where the new application lifecycle management practices of modern machine learning solutions conflict with corporate practices and regulatory requirements. What are the key architectural building blocks that organizations should put in place when adopting machine learning solutions? The answer is not trivial, but recently we have seen efforts from research labs and AI data science teams that are starting to lay down the path toward reference architectures for large-scale machine learning solutions. Read More

#frameworks

DeepSpeed: Extreme-scale model training for everyone

In February, we announced DeepSpeed, an open-source deep learning training optimization library, and ZeRO (Zero Redundancy Optimizer), a novel memory optimization technology in the library, which vastly advances large model training by improving scale, speed, cost, and usability.

… Today, we are happy to share our new advancements that not only push deep learning training to the extreme, but also democratize it for more people—from data scientists training on massive supercomputers to those training on low-end clusters or even on a single GPU. More specifically, DeepSpeed adds four new system technologies that further the AI at Scale initiative to innovate across Microsoft’s AI products and platforms. These offer extreme compute, memory, and communication efficiency, and they power model training with billions to trillions of parameters. Read More
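For orientation, DeepSpeed's basic usage pattern wraps an existing PyTorch model with `deepspeed.initialize` and moves the memory and parallelism choices into a JSON config. A minimal sketch — the model, toy loop, and config contents are illustrative, and recent DeepSpeed versions accept the config path directly (in practice you would launch this with the `deepspeed` CLI):

```python
import torch
import torch.nn as nn
import deepspeed

model = nn.Sequential(nn.Linear(1024, 4096), nn.ReLU(), nn.Linear(4096, 1024))
loss_fn = nn.MSELoss()

# ds_config.json (illustrative) could enable ZeRO stage 2:
# {"train_batch_size": 32,
#  "zero_optimization": {"stage": 2}}
model_engine, optimizer, _, _ = deepspeed.initialize(
    model=model,
    model_parameters=model.parameters(),
    config="ds_config.json",   # path to the JSON above
)

for step in range(10):                        # toy training loop
    batch = torch.randn(32, 1024).to(model_engine.device)
    loss = loss_fn(model_engine(batch), batch)
    model_engine.backward(loss)               # DeepSpeed-managed backward
    model_engine.step()                       # optimizer step, clipping, etc.
```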

#frameworks, #python

Oracle open-sources Java machine learning library

Tribuo offers tools for building and deploying classification, clustering, and regression models in Java, along with interfaces to TensorFlow, XGBoost, and ONNX

Looking to meet enterprise needs in the machine learning space, Oracle is making its Tribuo Java machine learning library available free under an open source license. Read More

#frameworks

Google’s TF-Coder tool automates machine learning model design

Researchers at Google Brain, one of Google’s AI research divisions, developed an automated tool for programming in machine learning frameworks like TensorFlow. They say it achieves better-than-human performance on some challenging development tasks, taking seconds to solve problems that take human programmers minutes to hours. Read More
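To illustrate the flavor of task TF-Coder targets (this is the shape of the problem, not TF-Coder's actual interface): you supply an input/output example, and the tool searches for a TensorFlow expression that maps one to the other.

```python
import tensorflow as tf

# Example specification: class indices in, one-hot rows out.
indices = tf.constant([0, 2, 1])
desired = tf.constant([[1., 0., 0.],
                       [0., 0., 1.],
                       [0., 1., 0.]])

# An expression a synthesizer could discover from that specification alone:
candidate = tf.one_hot(indices, depth=3)
assert bool(tf.reduce_all(candidate == desired))
```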

#big7, #devops, #frameworks

ABBYY Open-Sources NeoML, Machine Learning Library to Develop Artificial Intelligence Solutions

The framework provides software developers with powerful deep learning and traditional machine learning algorithms for creating applications that fuel digital transformation

ABBYY, a Digital Intelligence company, today announced the launch of NeoML, an open-source library for building, training, and deploying machine learning models. Available now on GitHub, NeoML supports both deep learning and traditional machine learning algorithms. The cross-platform framework is optimized for applications that run in cloud environments and on desktop and mobile devices. Compared to a popular open-source library, NeoML offers 15–20% faster performance for pre-trained image-processing models running on any device.[1] The combination of higher inference speed and platform independence makes the library ideal for mobile solutions that require both a seamless customer experience and on-device data processing. Read More

#frameworks, #mlaas

There’s No Such Thing As The Machine Learning Platform

In the past few years, you might have noticed the increasing pace at which vendors are rolling out “platforms” that serve the AI ecosystem, namely addressing data science and machine learning (ML) needs. The “Data Science Platform” and “Machine Learning Platform” are at the front lines of the battle for the mind share and wallets of data scientists, ML project managers, and others who manage AI projects and initiatives. If you’re a major technology vendor and you don’t have some sort of big play in the AI space, you risk rapidly becoming irrelevant. But what exactly are these platforms, and why is there such an intense market-share grab going on? Read More

#devops, #frameworks, #strategy

Keras vs. tf.keras: What’s the difference in TensorFlow 2.0?

The intertwined relationship between Keras and TensorFlow

Just in case you didn’t hear, the long-awaited TensorFlow 2.0 was officially released on September 30th.

And while it’s certainly a time for celebration, many deep learning practitioners such as Jeremiah are scratching their heads:

— What does the TensorFlow 2.0 release mean for me as a Keras user?
— Am I supposed to use the keras package for training my own neural networks?
— Or should I be using the tf.keras submodule inside TensorFlow 2.0 instead?
— Are there TensorFlow 2.0 features that I should care about as a Keras user?

The transition from TensorFlow 1.x to TensorFlow 2.0 is going to be a bit of a rocky one, at least to start, but with the right understanding, you’ll be able to navigate the migration with ease. Read More
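In code, the practical takeaway is small but important: in TensorFlow 2.0, Keras ships inside TensorFlow, so imports come from `tensorflow.keras` rather than the standalone `keras` package. A minimal sketch (the tiny model is a placeholder):

```python
# TensorFlow 2.0: use the bundled tf.keras rather than `import keras`
import tensorflow as tf
from tensorflow import keras

model = keras.Sequential([
    keras.layers.Dense(64, activation="relu", input_shape=(784,)),
    keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
print(tf.__version__)  # 2.x
```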

#frameworks