“Liquid” neural nets, based on a worm’s nervous system, can transform their underlying algorithms on the fly, giving them unprecedented speed and adaptability.
Artificial intelligence researchers have celebrated a string of successes with neural networks, computer programs that roughly mimic how our brains are organized. But despite rapid progress, neural networks remain relatively inflexible, with little ability to change on the fly or adjust to unfamiliar circumstances.
In 2020, two researchers at the Massachusetts Institute of Technology led a team that introduced a new kind of neural network based on real-life intelligence — but not our own. Instead, they took inspiration from the tiny roundworm Caenorhabditis elegans to produce what they called liquid neural networks. After a breakthrough last year, the novel networks may now be versatile enough to supplant their traditional counterparts for certain applications. Read More
Do we need deep graph neural networks?
One of the hallmarks of deep learning was the use of neural networks with tens or even hundreds of layers. In stark contrast, most of the architectures used in graph deep learning are shallow with just a handful of layers. In this post, I raise a heretical question: does depth in graph neural network architectures bring any advantage?
This year, deep learning on graphs was crowned among the hottest topics in machine learning. Yet those who picture convolutional neural networks with tens or even hundreds of layers when they hear “deep” would be disappointed to see that the majority of works on graph “deep” learning use just a few layers at most. Is “deep graph neural network” a misnomer, and should we, paraphrasing the classic, wonder whether depth should be considered harmful for learning on graphs? Read More
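One commonly cited reason depth can hurt on graphs is over-smoothing: repeatedly averaging each node's features with its neighbors' drives all node representations toward the same value. A minimal numpy sketch of this effect, using a toy 4-node path graph (graph and feature values chosen purely for illustration):

```python
import numpy as np

# Toy 4-node path graph, adjacency with self-loops added
A = np.array([[1, 1, 0, 0],
              [1, 1, 1, 0],
              [0, 1, 1, 1],
              [0, 0, 1, 1]], dtype=float)
A_hat = A / A.sum(axis=1, keepdims=True)  # row-normalized propagation matrix

X = np.array([[1.0], [0.0], [0.0], [0.0]])  # a distinctive feature on node 0

# Repeated neighborhood averaging (the linear part of a graph conv layer)
for _ in range(50):
    X = A_hat @ X

# After many "layers", node features are nearly indistinguishable
spread = float(X.max() - X.min())
print(round(spread, 6))
```

After 50 rounds of propagation the feature spread across nodes collapses toward zero, which is one intuition for why very deep stacks of plain graph convolutions often underperform shallow ones.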
Graph Neural Network and Some of GNN Applications – Everything You Need to Know
The recent success of neural networks has boosted research on pattern recognition and data mining.
Machine learning tasks, like object detection, machine translation, and speech recognition, have been given new life with end-to-end deep learning paradigms like CNNs, RNNs, and autoencoders.
Deep Learning is good at capturing hidden patterns of Euclidean data (images, text, videos).
But what about applications where data is generated from non-Euclidean domains, represented as graphs with complex relationships and interdependencies between objects?
That’s where Graph Neural Networks (GNNs) come in, which we’ll explore in this article. We’ll start with graph theory and basic definitions, move on to GNN forms and principles, and finish with some applications of GNNs. Read More
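The core GNN principle mentioned above — aggregating information from a node's neighbors and then transforming it — can be sketched as a single message-passing layer in plain numpy (a simplified illustration, not any particular library's API; the graph and weights are arbitrary):

```python
import numpy as np

def gnn_layer(A, X, W):
    """One message-passing layer: aggregate neighbor features, then transform.
    A: (n, n) adjacency matrix, X: (n, d) node features, W: (d, h) weights."""
    A_hat = A + np.eye(A.shape[0])        # add self-loops so a node sees itself
    deg = A_hat.sum(axis=1, keepdims=True)
    H = (A_hat / deg) @ X                 # mean-aggregate each neighborhood
    return np.maximum(H @ W, 0.0)         # linear transform + ReLU

# Toy triangle graph, 2-d input features, 3 hidden units
A = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
X = np.random.default_rng(0).normal(size=(3, 2))
W = np.random.default_rng(1).normal(size=(2, 3))
H = gnn_layer(A, X, W)
print(H.shape)
```

Stacking such layers lets information flow across longer paths in the graph, one hop per layer.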
Causal network models of SARS-CoV-2 expression and aging to identify candidates for drug repurposing
Given the severity of the SARS-CoV-2 pandemic, a major challenge is to rapidly repurpose existing approved drugs for clinical interventions. While a number of data-driven and experimental approaches have been suggested in the context of drug repurposing, a platform that systematically integrates available transcriptomic, proteomic and structural data is missing. More importantly, given that SARS-CoV-2 pathogenicity is highly age-dependent, it is critical to integrate aging signatures into drug discovery platforms. We here take advantage of large-scale transcriptional drug screens combined with RNA-seq data of the lung epithelium with SARS-CoV-2 infection as well as the aging lung. To identify robust druggable protein targets, we propose a principled causal framework that makes use of multiple data modalities. Our analysis highlights the importance of serine/threonine and tyrosine kinases as potential targets that intersect the SARS-CoV-2 and aging pathways. By integrating transcriptomic, proteomic and structural data that is available for many diseases, our drug discovery platform is broadly applicable. Rigorous in vitro experiments as well as clinical trials are needed to validate the identified candidate drugs. Read More
Build Your First Image Classifier With Convolutional Neural Network (CNN)
A Beginner’s Guide to CNNs with TensorFlow
Convolutional Neural Network (CNN) is a type of deep neural network primarily used in image classification and computer vision applications. This article will guide you through creating your own image classification model by implementing CNN using the TensorFlow package in Python. Read More
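The operation at the heart of every CNN layer — sliding a small kernel over the image and computing local weighted sums — can be sketched in plain numpy (a naive "valid"-mode version for illustration, not the TensorFlow API itself; the image and kernel values are made up):

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Naive 'valid'-mode 2-D cross-correlation, the core CNN-layer operation."""
    ih, iw = image.shape
    kh, kw = kernel.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Weighted sum of the kernel-sized patch at position (i, j)
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical-edge-detecting kernel applied to a tiny two-tone image
image = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1]], dtype=float)
kernel = np.array([[-1, 1],
                   [-1, 1]], dtype=float)
print(conv2d_valid(image, kernel))
```

The output responds strongly only where the image changes from dark to bright, which is why learned kernels act as feature detectors. In TensorFlow this operation is provided (vectorized and trainable) by the `Conv2D` layer.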
Liquid Time-constant Networks
We introduce a new class of time-continuous recurrent neural network models. Instead of declaring a learning system’s dynamics by implicit nonlinearities, we construct networks of linear first-order dynamical systems modulated via nonlinear interlinked gates. The resulting models represent dynamical systems with varying (i.e., liquid) time-constants coupled to their hidden state, with outputs being computed by numerical differential equation solvers. These neural networks exhibit stable and bounded behavior, yield superior expressivity within the family of neural ordinary differential equations, and give rise to improved performance on time-series prediction tasks. To demonstrate these properties, we first take a theoretical approach to find bounds over their dynamics, and compute their expressive power by the trajectory length measure in a latent trajectory space. We then conduct a series of time-series prediction experiments to demonstrate the approximation capability of Liquid Time-Constant Networks (LTCs) compared to classical and modern RNNs. Read More
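The abstract's "linear first-order dynamical systems modulated via nonlinear gates" can be sketched for a single unit: the state follows dx/dt = -x/τ + f(I)·(A - x), where the sigmoidal gate f both injects input and adds to the effective decay rate, so the system's time constant varies with the input. A one-unit Euler-integration sketch (parameter values and the single-unit reduction are illustrative assumptions, not the paper's full architecture):

```python
import math

def ltc_step(x, I, dt, tau=1.0, A=1.0, w=1.0, b=0.0):
    """One Euler step of a single liquid time-constant unit.

    dx/dt = -x/tau + f(I) * (A - x); the effective decay rate
    1/tau + f(I) — hence the time constant — depends on the input."""
    f = 1.0 / (1.0 + math.exp(-(w * I + b)))  # nonlinear gate
    dx = -x / tau + f * (A - x)
    return x + dt * dx

# Drive the unit with a constant input and watch it settle to its fixed point
x = 0.0
for _ in range(1000):
    x = ltc_step(x, I=2.0, dt=0.01)
print(round(x, 3))
```

For constant input the unit relaxes to x* = f·A / (1/τ + f), and a stronger input both raises that target and speeds up convergence — the "liquid" time constant at work.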
The Components of a Neural Network
This article is a continuation of a series on key theoretical concepts to Machine Learning.
Neural Networks are the poster boy of Deep Learning, a branch of Machine Learning characterised by its use of a large number of interwoven computations. The individual computations themselves are relatively straightforward; it is the complexity of the connections between them that gives these networks their advanced analytic ability. Read More
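That idea — simple individual computations wired together in bulk — is visible in a minimal two-layer network: each layer is just a matrix multiply, a bias, and an elementwise nonlinearity (the sizes and random weights below are arbitrary, for illustration only):

```python
import numpy as np

def dense(x, W, b, activation):
    """One fully connected layer: a simple computation, many connections."""
    return activation(W @ x + b)

relu = lambda z: np.maximum(z, 0.0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
x = rng.normal(size=3)                                        # input features
h = dense(x, rng.normal(size=(4, 3)), np.zeros(4), relu)      # hidden layer
y = dense(h, rng.normal(size=(1, 4)), np.zeros(1), sigmoid)   # output layer
print(y.shape)
```

Every "component" the article series covers — weights, biases, activations, layers — appears in those few lines; depth comes from repeating the same pattern.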
DeepMind researchers claim neural networks can outperform neurosymbolic models
So-called neurosymbolic models, which combine algorithms with symbolic reasoning techniques, appear to be much better-suited to predicting, explaining, and considering counterfactual possibilities than neural networks. But researchers at DeepMind claim neural networks can outperform neurosymbolic models under the right testing conditions. In a preprint paper, coauthors describe an architecture for spatiotemporal reasoning about videos in which all components are learned and all intermediate representations are distributed (rather than symbolic) throughout the layers of the neural network. The team says that it surpasses the performance of neurosymbolic models across all questions in a popular dataset, with the greatest advantage on the counterfactual questions. Read More
What I Didn’t Know I Didn’t Know About Convolutional Neural Networks
Most people coming to convolutional neural networks (CNNs) have already been exposed to vanilla, fully connected neural networks, also known as multilayer perceptrons. If you’re anything like me, this can lead to a false sense of security because even though they’re both nominally neural networks, the challenges associated with each are different, and what you’re used to doing with a classic neural network (not that CNNs aren’t classic) isn’t what you’ll be doing with a CNN. In fact, a lot of the terrain surrounding CNNs won’t even be on your radar. Read More
Neural ODEs with PyTorch Lightning and TorchDyn
Effortless, Scalable Training of Neural Differential Equations
Traditional neural network models are composed of a finite number of layers. Neural Differential Equations (NDEs), a core model class of the so-called continuous-depth learning framework, challenge this notion by defining forward inference passes as the solution of an initial value problem. This effectively means that NDEs can be thought of as consisting of a continuum of layers, where the vector field itself is parametrized by an arbitrary neural network. Since the seminal work that initially popularized the idea, the framework has grown quite large, seeing applications in control, generative modeling and forecasting. Read More
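The "forward pass as an initial value problem" idea can be sketched without any library: a small network defines the vector field dh/dt = f(h, t), and inference means integrating it from t0 to t1 (here with a fixed-step Euler solver; the tanh field and random weights are illustrative stand-ins, not TorchDyn's API):

```python
import numpy as np

def vector_field(h, t, W, b):
    """The 'continuous layer': a tiny neural network defining dh/dt."""
    return np.tanh(W @ h + b)

def odeint_euler(h0, t0, t1, n_steps, W, b):
    """Forward pass of a neural ODE = solving an initial value problem."""
    h, t = h0.copy(), t0
    dt = (t1 - t0) / n_steps
    for _ in range(n_steps):
        h = h + dt * vector_field(h, t, W, b)  # one Euler step
        t += dt
    return h

rng = np.random.default_rng(0)
W, b = rng.normal(size=(2, 2)), rng.normal(size=2)
h1 = odeint_euler(np.array([1.0, 0.0]), 0.0, 1.0, 100, W, b)
print(h1.shape)
```

Each solver step plays the role of a layer, so refining the step count deepens the network without adding parameters; libraries like TorchDyn swap the naive Euler loop for adaptive solvers and make the whole pass differentiable.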