This year we witnessed a remarkable astronomical milestone: the first-ever picture of a black hole. But did you know this black hole is more than 50 million light years away? Capturing that picture would have required a single-dish telescope as big as the earth itself. Since building such a telescope was practically impossible, scientists instead brought together a network of telescopes from across the world — the Event Horizon Telescope thus created was a large computational telescope with an effective aperture the diameter of the earth.
This is an excellent example of decentralized computation, and it shows how the power of decentralized learning can be exploited in other fields as well.
Built on the same principles, a new framework has emerged in AI that can compute across millions of devices and consolidate those results into better predictions and an enhanced user experience. Welcome to the era of federated (decentralized) machine learning. Read More
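The "consolidate those results" step is the heart of federated learning: each device trains on its own private data and only model updates, never raw data, are averaged by a server. Here is a minimal sketch of that idea with a one-parameter linear model; the function names, learning rate, and toy client datasets are illustrative assumptions, not any particular library's API.

```python
# Minimal federated-averaging sketch: clients train locally on private
# data; the server averages the returned weights, weighted by data size.

def local_update(weights, data, lr=0.1):
    """One gradient-descent step on a client's private data.
    Toy model: y = w * x with mean squared error loss."""
    w = weights[0]
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return [w - lr * grad]

def federated_average(client_weights, client_sizes):
    """Server-side aggregation, weighted by each client's dataset size."""
    total = sum(client_sizes)
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(len(client_weights[0]))
    ]

# Two clients with private data; raw data never leaves a client —
# only updated model weights travel to the server.
clients = [
    [(1.0, 2.0), (2.0, 4.0)],   # client A: roughly y = 2.0 * x
    [(1.0, 2.2), (3.0, 6.6)],   # client B: roughly y = 2.2 * x
]
global_weights = [0.0]
for _ in range(50):  # communication rounds
    updates = [local_update(global_weights, d) for d in clients]
    global_weights = federated_average(updates, [len(d) for d in clients])

print(round(global_weights[0], 2))  # → 2.13, between the clients' slopes
```

Real systems (and the papers below) add many refinements — multiple local epochs, secure aggregation, compression — but the data-stays-local pattern is the same.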
Tag Archives: SplitNN
Split learning for health: Distributed deep learning without sharing raw patient data
Can health entities collaboratively train deep learning models without sharing sensitive raw data? This paper proposes several configurations of a distributed deep learning method called SplitNN to facilitate such collaborations. SplitNN does not share raw data or model details with collaborating institutions. The proposed configurations of SplitNN cater to practical settings of i) entities holding different modalities of patient data, ii) centralized and local health entities collaborating on multiple tasks, and iii) learning without sharing labels. We compare performance and resource-efficiency trade-offs of SplitNN and other distributed deep learning methods, such as federated learning and large-batch synchronous stochastic gradient descent, and show highly encouraging results for SplitNN. Read More
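The core mechanic of split learning is that the network is cut at a layer: the data holder runs the lower layers, sends only the cut-layer activation to the collaborating server, and receives back only the gradient at the cut. Below is a toy sketch of that loop with one weight per half; the class names, layer sizes, and squared-error loss are illustrative choices, not the paper's exact setup.

```python
# Toy split-learning sketch: a two-layer linear model y = v * (w * x),
# cut between the layers. Raw data x stays on the client; only the
# activation (forward) and its gradient (backward) cross the boundary.

class ClientHalf:
    """Holds the raw data and the lower layer of the network."""
    def __init__(self, w=0.5):
        self.w = w
    def forward(self, x):
        self.x = x                  # raw data never leaves this object
        return self.w * x           # cut-layer activation, sent to server
    def backward(self, grad_act, lr=0.05):
        self.w -= lr * grad_act * self.x   # chain rule: dL/dw = dL/da * x

class ServerHalf:
    """Holds the upper layer; sees activations and labels only."""
    def __init__(self, v=0.5):
        self.v = v
    def step(self, act, y, lr=0.05):
        pred = self.v * act
        grad_pred = 2 * (pred - y)         # d(squared error)/d(pred)
        grad_act = grad_pred * self.v      # sent back across the cut
        self.v -= lr * grad_pred * act
        return grad_act

client, server = ClientHalf(), ServerHalf()
for _ in range(200):                        # fit the identity y = 1.0 * x
    for x, y in [(1.0, 1.0), (2.0, 2.0)]:
        act = client.forward(x)             # client -> server: activation
        grad_act = server.step(act, y)      # server -> client: gradient
        client.backward(grad_act)

# The composed model w * v converges toward the target slope 1.0,
# even though neither party ever saw the other's half or the raw data.
```

The label-free configuration mentioned in the abstract goes one step further, wrapping the loss computation back onto the client so the server never sees labels either.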
A little-known AI method can train on your health data without threatening your privacy
Machine learning has great potential to transform disease diagnosis and detection, but it’s been held back by patients’ reluctance to give up access to sensitive information. Read More
A new AI method can train on medical records without revealing patient data
When Google announced that it would absorb DeepMind’s health division, it sparked a major controversy over data privacy. Though DeepMind confirmed that the move wouldn’t actually hand raw patient data to Google, just the idea of giving a tech giant intimate, identifying medical records made people queasy. This problem with obtaining lots of high-quality data has become the biggest obstacle to applying machine learning in medicine. Read More
SplitNet: Learning to Semantically Split Deep Networks for Parameter Reduction and Model Parallelization
A novel deep neural network that is both lightweight and effectively structured for model parallelization. Our network, which we name SplitNet, automatically learns to split the network weights into either a set or a hierarchy of multiple groups that use disjoint sets of features, by learning both the class-to-group and feature-to-group assignment matrices along with the network weights. This produces a tree-structured network with no connections between branched subtrees of semantically disparate class groups. SplitNet thus greatly reduces the number of parameters and required computations, and is also embarrassingly model-parallelizable at test time, since the evaluation of each subnetwork is completely independent except for the shared lower-layer weights, which can be duplicated over multiple processors or assigned to a separate processor. Read More