The stringent low-latency and privacy requirements of emerging high-stakes applications with intelligent devices such as drones and smart vehicles make cloud computing inapplicable in these scenarios. Instead, edge machine learning becomes increasingly attractive for performing training and inference directly at network edges without sending data to a centralized data center. This stimulates a nascent field termed federated learning, which trains a machine learning model in a distributed manner on mobile devices with limited computation, storage, energy, and bandwidth. To preserve data privacy and address the issues of unbalanced and non-IID data points across different devices, the federated averaging algorithm has been proposed for global model aggregation, computing the weighted average of the locally updated models at the selected devices. However, the limited communication bandwidth becomes the main bottleneck for aggregating the locally computed updates. We thus propose a novel over-the-air computation based approach for fast global model aggregation by exploiting the superposition property of a wireless multiple-access channel. This is achieved by joint device selection and beamforming design, which is modeled as a sparse and low-rank optimization problem to support efficient algorithm design. To this end, we provide a difference-of-convex-functions (DC) representation of the sparse and low-rank function to enhance sparsity and accurately detect the fixed-rank constraint in the device selection procedure. A DC algorithm is further developed to solve the resulting DC program with global convergence guarantees. The algorithmic advantages and admirable performance of the proposed methodologies are demonstrated through extensive numerical results.
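To make the aggregation idea concrete, the following is a minimal sketch, not from the paper, of federated averaging and of how over-the-air computation can realize it: each device pre-scales its local model by its data-size weight, and simultaneous transmission over an idealized (noise-free, unit-gain) multiple-access channel superimposes the signals so the receiver obtains the weighted average in a single channel use. All variable names and the setup (5 devices, 8-dimensional models) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: K devices, each holding a locally updated model
# w_k (row of W) trained on n_k data points.
K, dim = 5, 8
n = rng.integers(50, 200, size=K)       # local dataset sizes
W = rng.standard_normal((K, dim))       # locally updated models

# Conventional federated averaging: weighted average of local models,
# with weights proportional to local dataset sizes.
weights = n / n.sum()
w_global = weights @ W

# Over-the-air aggregation: each device pre-scales its model, and all
# devices transmit simultaneously; the multiple-access channel adds the
# analog signals, so the receiver gets the weighted sum in one channel
# use (noise and fading are ignored in this idealized sketch).
tx_signals = weights[:, None] * W       # per-device pre-scaling
rx_signal = tx_signals.sum(axis=0)      # channel superposition

assert np.allclose(w_global, rx_signal)
```

In the paper's actual setting, fading channels and receiver noise make this superposition imperfect, which is what motivates the joint device selection and receive beamforming design.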