Federated Learning via Over-the-Air Computation

The stringent requirements for low latency and privacy of emerging high-stakes applications with intelligent devices such as drones and smart vehicles make cloud computing inapplicable in these scenarios. Instead, edge machine learning becomes increasingly attractive for performing training and inference directly at network edges without sending data to a centralized data center. This stimulates a nascent field termed federated learning, in which a machine learning model is trained in a distributed manner on mobile devices with limited computation, storage, energy and bandwidth. To preserve data privacy and address the issues of unbalanced and non-IID data points across different devices, the federated averaging algorithm has been proposed for global model aggregation by computing the weighted average of the locally updated models at the selected devices. However, the limited communication bandwidth becomes the main bottleneck for aggregating the locally computed updates. We thus propose a novel over-the-air computation based approach for fast global model aggregation by exploring the superposition property of a wireless multiple-access channel. This is achieved by joint device selection and beamforming design, which is modeled as a sparse and low-rank optimization problem to support efficient algorithm design. To achieve this goal, we provide a difference-of-convex-functions (DC) representation for the sparse and low-rank function to enhance sparsity and accurately detect the fixed-rank constraint in the device selection procedure. A DC algorithm is further developed to solve the resulting DC program with global convergence guarantees. The algorithmic advantages and admirable performance of the proposed methodologies are demonstrated through extensive numerical results.
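
As a point of reference for the aggregation step discussed above, here is a minimal sketch of federated averaging, assuming each selected device returns a locally updated weight vector trained on a known number of samples; the paper's over-the-air analog aggregation and beamforming design are not modeled here, and all names and numbers are illustrative.

```python
import numpy as np

def federated_average(local_weights, sample_counts):
    """Weighted average of locally updated models (FedAvg-style aggregation).

    local_weights : list of 1-D numpy arrays, one per selected device
    sample_counts : number of local training samples on each device,
                    used as the aggregation weight
    """
    total = sum(sample_counts)
    global_w = np.zeros_like(local_weights[0])
    for w_k, n_k in zip(local_weights, sample_counts):
        global_w += (n_k / total) * w_k
    return global_w

# Hypothetical round: three devices with unbalanced, non-IID data.
updates = [np.array([0.2, -1.0]), np.array([0.5, -0.8]), np.array([0.1, -1.2])]
counts = [100, 400, 50]
print(federated_average(updates, counts))
```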

#federated-learning, #neural-networks, #split-learning

Towards federated learning at scale: system design

Federated Learning (FL) (McMahan et al., 2017) is a distributed machine learning approach which enables training on a large corpus of decentralized data residing on devices like mobile phones. FL is one instance of the more general approach of “bringing the code to the data, instead of the data to the code” and addresses the fundamental problems of privacy, ownership, and locality of data. The general description of FL has been given by McMahan & Ramage (2017), and its theory has been explored in Konečný et al. (2016a); McMahan et al. (2017; 2018).

#federated-learning, #neural-networks, #split-learning

Data Transparent ML + Health Privacy vs Societal Benefits Training NN without Raw Data

Split Learning versus Federated Learning for Data Transparent ML, Camera Culture Group, MIT Media Lab. SlideShare Briefing.

#neural-networks, #split-learning

No Peek: A Survey of private distributed deep learning

A survey of distributed deep learning models for training or inference without accessing raw data from clients. These methods aim to protect confidential patterns in data while still allowing servers to train models. The distributed deep learning methods of federated learning, split learning and large-batch stochastic gradient descent are compared, in addition to the private and secure approaches of differential privacy, homomorphic encryption, oblivious transfer and garbled circuits, in the context of neural networks. We study their benefits, limitations and trade-offs with regard to computational resources, data leakage and communication efficiency, and also share our anticipated future trends.
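
For the split learning branch of the survey, a rough numpy sketch of the basic idea (layer sizes and weights are made up): the client computes only up to a cut layer and sends those activations, never the raw records, to the server, which completes the forward pass.

```python
import numpy as np

rng = np.random.default_rng(0)

# Client-side portion of the network: raw data stays on the device.
W_client = rng.normal(size=(16, 8))       # input dim 16 -> cut-layer dim 8

# Server-side portion: sees only the cut-layer activations.
W_server = rng.normal(size=(8, 2))        # cut-layer dim 8 -> 2 classes

def client_forward(x):
    """Compute activations up to the cut layer; only these leave the client."""
    return np.maximum(x @ W_client, 0.0)  # ReLU at the cut layer

def server_forward(smashed):
    """Finish the forward pass on the server from the cut-layer activations."""
    return smashed @ W_server

x_private = rng.normal(size=(4, 16))      # raw data, never transmitted
activations = client_forward(x_private)   # what actually crosses the network
print(server_forward(activations).shape)  # (4, 2)
```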

#neural-networks, #split-learning

Parallel and Distributed Deep Learning

This report explores ways to parallelize and distribute deep learning in multi-core and distributed settings. We have analyzed (empirically) the speedup in training a CNN using a conventional single-core CPU and a GPU, and provide practical suggestions to improve training times. In the distributed setting, we study and analyze synchronous and asynchronous weight update algorithms (such as Parallel SGD, ADMM and Downpour SGD) and derive worst-case asymptotic communication cost and computation time for each of these algorithms.
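
As a toy illustration of the synchronous parallel SGD pattern analyzed in the report (the ADMM and Downpour SGD variants are not shown), the sketch below simulates several workers in one process on a made-up least-squares problem and averages their shard gradients each round, standing in for an all-reduce.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic least-squares problem: minimize ||X w - y||^2.
X = rng.normal(size=(1024, 10))
w_true = rng.normal(size=10)
y = X @ w_true

def local_gradient(w, X_shard, y_shard):
    """Gradient of the squared loss on one worker's shard of the data."""
    return 2.0 * X_shard.T @ (X_shard @ w - y_shard) / len(y_shard)

n_workers, lr = 4, 0.05
shards = np.array_split(np.arange(len(y)), n_workers)
w = np.zeros(10)

for step in range(200):
    # Each worker computes a gradient on its shard; averaging the gradients
    # mimics the all-reduce of one synchronous update round.
    grads = [local_gradient(w, X[idx], y[idx]) for idx in shards]
    w -= lr * np.mean(grads, axis=0)

print(float(np.linalg.norm(w - w_true)))  # should be close to 0
```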

#distributed-learning, #machine-learning, #split-learning

Demystifying Parallel and Distributed Deep Learning: An In-Depth Concurrency Analysis

Deep Neural Networks (DNNs) are becoming an important tool in modern computing applications. Accelerating their training is a major challenge and techniques range from distributed algorithms to low-level circuit design. In this survey, we describe the problem from a theoretical perspective, followed by approaches for its parallelization. We present trends in DNN architectures and the resulting implications on parallelization strategies. We then review and model the different types of concurrency in DNNs: from the single operator, through parallelism in network inference and training, to distributed deep learning. We discuss asynchronous stochastic optimization, distributed system architectures, communication schemes, and neural architecture search. Based on those approaches, we extrapolate potential directions for parallelism in deep learning.
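
A compact sketch of the asynchronous stochastic optimization pattern the survey covers: workers read a possibly stale copy of the parameters, compute a gradient, and push the update to a shared parameter-server state without waiting for each other. The thread count, learning rate, and toy objective below are assumptions for illustration only.

```python
import threading
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(2000, 5))
w_true = rng.normal(size=5)
y = X @ w_true

params = np.zeros(5)   # shared "parameter server" state
lock = threading.Lock()
lr = 0.01

def worker(worker_id, steps=300):
    """Asynchronous worker: read stale params, compute a gradient, push an update."""
    global params
    local_rng = np.random.default_rng(worker_id)
    for _ in range(steps):
        batch = local_rng.integers(0, len(y), size=32)
        with lock:
            w_stale = params.copy()   # snapshot may be stale by the time the update lands
        grad = 2.0 * X[batch].T @ (X[batch] @ w_stale - y[batch]) / len(batch)
        with lock:
            params -= lr * grad       # apply the gradient computed from stale parameters

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(float(np.linalg.norm(params - w_true)))  # small despite the staleness
```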

#distributed-learning, #machine-learning, #split-learning

Distributed Deep Neural Networks over the Cloud, the Edge and End Devices

We propose distributed deep neural networks (DDNNs) over distributed computing hierarchies, consisting of the cloud, the edge (fog) and end devices. While being able to accommodate inference of a deep neural network (DNN) in the cloud, a DDNN also allows fast and localized inference using shallow portions of the neural network at the edge and end devices. When supported by a scalable distributed computing hierarchy, a DDNN can scale up in neural network size and scale out in geographical span. Due to its distributed nature, DDNNs enhance sensor fusion, system fault tolerance and data privacy for DNN applications.
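
A minimal sketch of the local/remote exit idea behind a DDNN, with made-up weights and a confidence threshold: the end device answers from a shallow early exit when it is confident enough and otherwise forwards only the intermediate representation upstream.

```python
import numpy as np

rng = np.random.default_rng(3)

# Shallow "edge" portion and a deeper "cloud" classifier (weights are illustrative).
W_edge = rng.normal(size=(20, 10))   # raw features -> intermediate representation
W_exit = rng.normal(size=(10, 3))    # local early-exit classifier
W_cloud = rng.normal(size=(10, 3))   # classifier applied in the cloud

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def ddnn_infer(x, threshold=0.8):
    """Exit locally if the early classifier is confident; otherwise offload."""
    h = np.tanh(x @ W_edge)          # computed on the end device
    p_local = softmax(h @ W_exit)
    if p_local.max() >= threshold:
        return int(p_local.argmax()), "local exit"
    # Only the intermediate representation h is sent upstream, not raw x.
    p_cloud = softmax(h @ W_cloud)
    return int(p_cloud.argmax()), "offloaded to cloud"

for _ in range(3):
    print(ddnn_infer(rng.normal(size=20)))
```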

#distributed-learning, #machine-learning, #split-learning

Large Scale Distributed Neural Network Training Through Online Distillation

Techniques such as ensembling and distillation promise model quality improvements when paired with almost any base model. However, due to increased test-time cost (for ensembles) and increased complexity of the training pipeline (for distillation), these techniques are challenging to use in industrial settings. In this paper we explore a variant of distillation which is relatively straightforward to use as it does not require a complicated multi-stage setup or many new hyperparameters. Our first claim is that online distillation enables us to use extra parallelism to fit very large datasets about twice as fast. Crucially, we can still speed up training even after we have already reached the point at which additional parallelism provides no benefit for synchronous or asynchronous stochastic gradient descent. Two neural networks trained on disjoint subsets of the data can share knowledge by encouraging each model to agree with the predictions the other model would have made. These predictions can come from a stale version of the other model so they can be safely computed using weights that only rarely get transmitted. Our second claim is that online distillation is a cost-effective way to make the exact predictions of a model dramatically more reproducible. We support our claims using experiments on the Criteo Display Ad Challenge dataset, ImageNet, and the largest to-date dataset used for neural language modeling, containing 6 × 10^11 tokens and based on the Common Crawl repository of web data.
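
To make the agreement term concrete, here is a rough numpy sketch of a codistillation-style loss under the assumptions stated in the comments: the usual cross-entropy to the labels plus a penalty for disagreeing with the (possibly stale) predictions of the peer model. This is not the authors' exact formulation; the function name and weighting are illustrative.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def codistillation_loss(logits, labels, peer_probs, alpha=0.5):
    """Cross-entropy to the true labels plus a penalty for disagreeing with
    the (possibly stale) predictions of the peer model.

    logits     : (batch, classes) outputs of this model
    labels     : (batch,) integer class labels
    peer_probs : (batch, classes) predictions from a stale copy of the peer
    alpha      : weight on the agreement (distillation) term -- an assumption
    """
    p = softmax(logits)
    n = len(labels)
    ce = -np.log(p[np.arange(n), labels] + 1e-12).mean()
    agree = -(peer_probs * np.log(p + 1e-12)).sum(axis=1).mean()
    return (1 - alpha) * ce + alpha * agree

# Toy usage with made-up numbers.
rng = np.random.default_rng(4)
logits = rng.normal(size=(8, 5))
labels = rng.integers(0, 5, size=8)
peer = softmax(rng.normal(size=(8, 5)))   # stands in for stale peer predictions
print(codistillation_loss(logits, labels, peer))
```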

#distributed-learning, #machine-learning, #split-learning

Distributed Deep Learning, Part 1: An Introduction to Distributed Training of Neural Networks

Modern neural network architectures trained on large data sets can obtain impressive performance across a wide variety of domains, from speech and image recognition to natural language processing and industry-focused applications such as fraud detection and recommendation systems. But training these neural network models is computationally demanding. Although significant advances have been made in recent years in GPU hardware, network architectures and training methods, the fact remains that network training can take an impractically long time on a single machine. Fortunately, we are not restricted to a single machine: a significant amount of work and research has been conducted on enabling the efficient distributed training of neural networks.

#distributed-learning, #machine-learning, #split-learning

Multi-objective Evolutionary Federated Learning

Federated learning is an emerging technique used to prevent the leakage of private information. Unlike centralized learning, which needs to collect data from users and store it on a cloud server, federated learning makes it possible to learn a global model while the data remain distributed on the users' devices. However, compared with the traditional centralized approach, the federated setting consumes considerable communication resources on the clients, which are indispensable for updating global models, and this overhead prevents the technique from being widely used. In this paper, we aim to optimize the structure of the neural network models in federated learning using a multi-objective evolutionary algorithm to simultaneously minimize the communication costs and the global model test errors. A scalable method for encoding network connectivity is adapted to federated learning to enhance the efficiency of evolving deep neural networks. Experimental results on both multilayer perceptrons and convolutional neural networks indicate that the proposed optimization method is able to find optimized neural network models that not only significantly reduce communication costs but also improve the learning performance of federated learning compared with standard fully connected neural networks.
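
As a small illustration of the bi-objective bookkeeping such a method needs (not the authors' encoding or evolutionary operators), the sketch below scores hypothetical candidate network structures by communication cost and test error and keeps the non-dominated ones as the Pareto front.

```python
def pareto_front(candidates):
    """Keep candidates not dominated on (communication_cost, test_error).

    candidates : list of (name, communication_cost, test_error) tuples,
                 where lower is better for both objectives
    """
    front = []
    for name, cost, err in candidates:
        dominated = any(
            (c2 <= cost and e2 <= err) and (c2 < cost or e2 < err)
            for _, c2, e2 in candidates
        )
        if not dominated:
            front.append((name, cost, err))
    return front

# Hypothetical candidate network structures with made-up scores.
population = [
    ("dense",      1.00, 0.08),
    ("sparse-a",   0.35, 0.11),
    ("sparse-b",   0.20, 0.15),
    ("sparse-bad", 0.40, 0.20),   # dominated by sparse-a
]
print(pareto_front(population))
```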

#federated-learning, #machine-learning, #split-learning