As privacy protection gains increasing importance, more models are being trained on edge devices and subsequently merged at a central server through Federated Learning (FL). However, current research overlooks the impact of network topology, physical distance, and data heterogeneity on edge devices, leading to issues such as increased latency and degraded model performance. To address these issues, we propose a new federated learning scheme for edge devices called Federated Learning with Encrypted Data Sharing (FedEDS). FedEDS uses the client model and the model’s stochastic layer to train a data encryptor. The data encryptor generates encrypted data and shares it with other clients. Each client then uses the corresponding client’s stochastic layer and the encrypted data to train and adjust its local model, so that FedEDS trains on the client’s local private data together with encrypted data shared by other clients. This approach accelerates the convergence of federated learning and mitigates the negative impact of data heterogeneity, making it suitable for application services deployed on edge devices that require rapid convergence. Experimental results show the efficacy of FedEDS in improving model performance. Read More
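The exchange the abstract describes can be sketched very loosely. Everything below is hypothetical and illustrative only: the real FedEDS encryptor and stochastic layer are learned components defined in the paper, whereas here the "stochastic layer" is just a random linear projection, the model is plain least squares, and all names are made up for the sketch.

```python
# Hypothetical sketch of the FedEDS-style data flow: clients share projected
# ("encrypted") features plus the stochastic layer used to produce them, and
# peers train through that layer rather than on raw private data.
import numpy as np

rng = np.random.default_rng(0)
DIM = 4

class Client:
    def __init__(self, X, y):
        self.X, self.y = X, y                       # private local data
        # Stand-in for the paper's stochastic layer: a random invertible map.
        self.layer = rng.normal(size=(DIM, DIM))

    def shared_package(self):
        # What leaves the device: projected features, labels, and the layer.
        # NOTE: this toy projection is trivially invertible and offers no real
        # privacy; it only illustrates the exchange, not the paper's encryptor.
        return self.X @ self.layer, self.y, self.layer

    def train(self, packages):
        # Fit on local data plus peers' shared data. For a linear model,
        # composing the inverse of the sender's layer with the features is
        # equivalent to training "through" that stochastic layer.
        Xs, ys = [self.X], [self.y]
        for X_enc, y_enc, layer in packages:
            Xs.append(X_enc @ np.linalg.inv(layer))
            ys.append(y_enc)
        X, y = np.vstack(Xs), np.concatenate(ys)
        self.w, *_ = np.linalg.lstsq(X, y, rcond=None)

# Two clients; each can train on its own data plus the other's shared package.
true_w = rng.normal(size=DIM)
clients = []
for _ in range(2):
    X = rng.normal(size=(50, DIM))
    clients.append(Client(X, X @ true_w))
clients[0].train([clients[1].shared_package()])
```

Because client 0 sees both its own 50 samples and client 1's shared data, it fits the underlying model from more data than it holds locally, which is the effect the abstract attributes to encrypted data sharing.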
Tag Archives: Federated Learning
Google AI Improves The Performance Of Smart Text Selection Models By Using Federated Learning
Smart Text Selection is one of Android’s most popular features, assisting users in selecting, copying, and using text by anticipating the desired word or combination of words around a user’s tap and expanding the selection appropriately. Selections are automatically extended with this feature, and users are offered an app to open the selection with, based on classification categories such as addresses and phone numbers, saving them even more time.
The Google team worked to improve the performance of Smart Text Selection by using federated learning to train a neural network model on user interactions while preserving personal privacy. Thanks to this effort, which is part of Android’s new Private Compute Core secure environment, the research team was able to improve the model’s selection accuracy by up to 20% on some types of entities. Read More
Federated Learning and the Future of ML
By sharing ML models and training data, organisations can power up their ML projects. Now there’s a way to do it without compromising data privacy or security.
Amazing things happen when different organisations work together. It’s something that we see in business and technology, where collaboration has often helped drive new ideas or product categories forwards, and something we’ve seen over the last year or so in medicine and science, as scientists, institutions and pharmaceutical companies have worked together to fight the COVID-19 pandemic. Now, though, collaboration could also prove crucial to harnessing the power of machine learning and AI, in turn fuelling further developments in medicine, business, technology and science, but only if organisations can find a secure way to share data. To be more specific, they need a way for their machine learning models to train using data from a wider range of datasets, while reducing the risk of compromising the privacy or security of the data.
Machine learning is already revolutionising fields as diverse as finance, security, public services, manufacturing and transportation. It’s helping doctors to spot and diagnose conditions, fraud investigators to uncover money laundering and city transport planners to optimise their transport systems. But before machine learning models can analyse streams of data to spot a problem or recommend an action, they need to be trained using existing datasets. Generally speaking, the more data they have to work with, the more accurate and useful their models will be. Read More
An Introduction to Federated Learning
Federated (decentralized) learning (FL) is an approach in which each device downloads the current model and computes an updated model locally using its own data, rather than sending that data to a central pool. These locally trained models are then sent from the devices back to the central server, where they are aggregated into a single consolidated and improved global model, which is sent back to the devices. Federated learning makes it possible for AI algorithms to gain experience from a vast range of data located at different sites. Read More
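The train-locally-then-aggregate loop described above can be sketched in a few lines, assuming a simple linear model and NumPy in place of a real on-device training stack (a FedAvg-style weighted average; all names are illustrative):

```python
# Minimal sketch of a federated learning round: each "device" trains on its
# own local data, and the server averages the returned models.
import numpy as np

def local_update(global_weights, X, y, lr=0.1, epochs=5):
    """Train a linear model on one device's local data (never uploaded)."""
    w = global_weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)   # least-squares gradient
        w -= lr * grad
    return w

def federated_round(global_weights, clients):
    """Each device trains locally; the server aggregates the local models."""
    local_models = [local_update(global_weights, X, y) for X, y in clients]
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    # Weighted average: devices with more local data contribute more.
    return np.average(local_models, axis=0, weights=sizes)

# Toy example: three devices, each holding its own slice of data.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for n in (20, 30, 50):
    X = rng.normal(size=(n, 2))
    clients.append((X, X @ true_w))

w = np.zeros(2)
for _ in range(30):          # repeated rounds refine the global model
    w = federated_round(w, clients)
```

Note that only model weights cross the device/server boundary in `federated_round`; the `(X, y)` pairs stay inside `local_update`, which is the privacy property the excerpt highlights.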
Learning Deep Neural Networks incrementally forever
The hallmark of human intelligence is the capacity to learn. A toddler has aptitudes for reasoning about space, quantities, or causality comparable to those of other ape species (source). The difference between our cousins and us is the ability to learn from others.
The recent deep learning hype aims at Artificial General Intelligence (AGI): an AI that would display (supra-)human-like intelligence. Unfortunately, current deep learning models are flawed in many ways: one of them is that they are unable to learn continuously, as humans do through years of schooling and beyond. Read More
How federated learning could shape the future of AI in a privacy-obsessed world
You may not have noticed, but two of the world’s most popular machine learning frameworks — TensorFlow and PyTorch — have taken steps in recent months toward privacy with solutions that incorporate federated learning.
Instead of gathering users’ data in the cloud to build training sets, federated learning trains AI models on mobile devices in large batches, then transfers those learnings back to a global model without the data ever leaving the device. Read More
How Federated Learning is going to revolutionize AI
This year we observed an amazing astronomical phenomenon: the first-ever picture of a black hole. But did you know this black hole is more than 50 million light years away? Capturing this picture would have required a single-dish telescope as big as the Earth itself! Since it was practically impossible to build such a telescope, scientists brought together a network of telescopes from across the world; the Event Horizon Telescope thus created was a large computational telescope with an effective aperture the diameter of the Earth.
This is an excellent example of decentralized computation and it displays the power of decentralized learning that can be exploited in other fields as well.
Formed on the same principles, a new framework has emerged in AI which has the capability to compute across millions of devices and consolidate those results to provide better predictions for enhancing user experience. Welcome to the era of federated (decentralised) machine learning. Read More
AI has a privacy problem, but these techniques could fix it
Artificial intelligence promises to transform — and indeed, has already transformed — entire industries, from civic planning and health care to cybersecurity. But privacy remains an unsolved challenge in the industry, particularly where compliance and regulation are concerned.
Recent controversies put the problem into sharp relief. The Royal Free London NHS Foundation Trust, a division of the U.K.’s National Health Service based in London, provided Alphabet’s DeepMind with data on 1.6 million patients without their consent. Google — whose health data-sharing partnership with Ascension became the subject of scrutiny in November — abandoned plans to publish scans of chest X-rays over concerns that they contained personally identifiable information. This past summer, Microsoft quietly removed a data set (MS Celeb) with more than 10 million images of people after it was revealed that some weren’t aware they had been included. Read More
Federated Machine Learning – Collaborative Machine Learning without Centralised Training Data
Like a failed communist state, traditional machine learning centralises the training of a model on a single machine. Centralising data in a single location is not always possible for a variety of reasons, such as slow network connections and legal constraints. These limitations have produced a series of techniques that allow a model to be trained in a decentralised way. This collection of techniques is referred to as Federated Machine Learning. Read More
Google proposes new privacy and anti-fingerprinting controls for the web
Google today announced a new long-term initiative that, if fully realized, will make it harder for online marketers and advertisers to track you across the web. This new proposal follows the company’s plans to change how cookies in Chrome work and to make it easier for users to block tracking cookies.
Today’s proposal for a new open standard extends this by looking at how Chrome can close the loopholes that the digital advertising ecosystem could use to circumvent those cookie controls. And soon, that may mean your browser will feature new options that give you more control over how much you share without losing your anonymity. Read More