Amazon Web Services Inc. said today it’s partnering with an artificial intelligence startup called Hugging Face Inc. as part of an effort to simplify and accelerate the adoption of natural language processing models.
… For its part, Hugging Face has announced a couple of new services built on Amazon SageMaker, AWS's service for building, training and deploying machine learning models in the cloud and at the edge. They include AutoNLP, which provides an automatic way to train, evaluate and deploy state-of-the-art NLP models for different tasks, and the Accelerated Inference API, which speeds up inference for hosted models. The startup has also chosen AWS as its preferred cloud provider. Read More
Tag Archives: Big7
After Neoliberalism
At the heart of the new age are novel configurations of fear, certainty, and power.
Shoshana Zuboff. The Age of Surveillance Capitalism. Public Affairs, 2019.
Today there is no more powerful corporation in the world than Google, so it may be hard to remember that not too long ago, the company was in a fight for its very existence. In its early years, Google couldn’t figure out how to make money. … Google engineers were aware that users’ search queries produced a great deal of “collateral data,” which they collected as a matter of course. Data logs revealed not only common keywords, but also dwell times and click patterns. This “data exhaust,” it began to dawn on some of Google’s executives, could be an immensely valuable resource for the company, since the data contained information that advertisers could use to target consumers. Read More
Inside Facebook Reality Labs: The Next Era of Human-Computer Interaction
Facebook Reality Labs (FRL) Chief Scientist Michael Abrash has called AR interaction “one of the hardest and most interesting multi-disciplinary problems around,” because it’s a complete paradigm shift in how humans interact with computers. The last great shift began in the 1960s when Doug Engelbart’s team invented the mouse and helped pave the way for the graphical user interfaces (GUIs) that dominate our world today. The invention of the GUI fundamentally changed HCI for the better — and it’s a sea change that’s held for decades.
But all-day wearable AR glasses require a new paradigm because they will be able to function in every situation you encounter in the course of a day. They need to be able to do what you want them to do and tell you what you want to know when you want to know it, in much the same way that your own mind works — seamlessly sharing information and taking action when you want it, and not getting in your way otherwise. Read More
How Facebook got addicted to spreading misinformation
The company’s AI algorithms gave it an insatiable habit for lies and hate speech. Now the man who built them can’t fix the problem.
It was March 23, 2018, just days after the revelation that Cambridge Analytica, a consultancy that worked on Donald Trump’s 2016 presidential election campaign, had surreptitiously siphoned the personal data of tens of millions of Americans from their Facebook accounts in an attempt to influence how they voted. It was the biggest privacy breach in Facebook’s history. …The Cambridge Analytica scandal would kick off Facebook’s largest publicity crisis ever. Read More
Self-supervised Pretraining of Visual Features in the Wild
Recently, self-supervised learning methods like MoCo [22], SimCLR [8], BYOL [20] and SwAV [7] have reduced the gap with supervised methods. These results have been achieved in a controlled environment, that is, the highly curated ImageNet dataset. However, the premise of self-supervised learning is that it can learn from any random image and from any unbounded dataset. In this work, we explore whether self-supervision lives up to its expectation by training large models on random, uncurated images with no supervision. Our final SElf-supERvised (SEER) model, a RegNetY with 1.3B parameters trained on 1B random images with 512 GPUs, achieves 84.2% top-1 accuracy, surpassing the best self-supervised pretrained model by 1% and confirming that self-supervised learning works in a real-world setting. Interestingly, we also observe that self-supervised models are good few-shot learners, achieving 77.9% top-1 with access to only 10% of ImageNet. Read More
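The few-shot claim (77.9% top-1 with only 10% of ImageNet labels) rests on a standard idea: freeze the pretrained features and fit a small classifier on the labeled fraction. A minimal sketch of that idea, using synthetic stand-in "features" and a plain NumPy logistic-regression probe (everything here is illustrative, not the SEER setup):

```python
import numpy as np

# Toy sketch of low-shot linear probing: a frozen "backbone" maps inputs to
# features, and only a small labeled fraction is used to train a linear
# classifier on top. Data here is synthetic, not real backbone features.
rng = np.random.default_rng(0)

# Pretend these are frozen backbone features for two classes.
n_per_class, dim = 500, 16
feats = np.vstack([
    rng.normal(-1.0, 1.0, (n_per_class, dim)),   # class 0
    rng.normal(+1.0, 1.0, (n_per_class, dim)),   # class 1
])
labels = np.repeat([0, 1], n_per_class)

# Keep only 10% of the labels for training the probe, test on the rest.
idx = rng.permutation(len(labels))
n_labeled = len(labels) // 10
train, test = idx[:n_labeled], idx[n_labeled:]

# Logistic-regression probe trained by plain gradient descent.
w, b = np.zeros(dim), 0.0
for _ in range(200):
    z = feats[train] @ w + b
    p = 1.0 / (1.0 + np.exp(-z))          # sigmoid
    grad_w = feats[train].T @ (p - labels[train]) / n_labeled
    grad_b = np.mean(p - labels[train])
    w -= 0.5 * grad_w
    b -= 0.5 * grad_b

pred = (feats[test] @ w + b) > 0
acc = np.mean(pred == labels[test])
print(f"linear-probe accuracy with 10% of labels: {acc:.3f}")
```

The point of the sketch is the data split, not the classifier: if the frozen features already separate the classes well, a tiny labeled set suffices, which is what makes a strong self-supervised backbone a good few-shot learner.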
AI Moving to the Edge
As edge computing demands increase, major cloud providers are announcing solutions to fill that need: Google with Coral, Amazon with Panorama, and now Microsoft with Percept. As Microsoft’s John Roach said, there are “millions of scenarios becoming possible thanks to a combination of artificial intelligence and computing on the edge. Standalone edge devices can take advantage of AI tools for things like translating text or recognizing images without having to constantly access cloud computing capabilities.” Read More
#iot, #big7
Google’s Model Search automatically optimizes and identifies AI models
Google today announced the release of Model Search, an open source platform designed to help researchers develop machine learning models efficiently and automatically. Instead of focusing on a specific domain, Google says that Model Search is domain-agnostic, making it capable of finding a model architecture that fits a dataset and problem while minimizing coding time and compute resources. Read More
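Searching for a model architecture that fits a dataset comes down to sampling candidate configurations from a search space and keeping the one that scores best. The sketch below shows that loop in its simplest form, random search with a stand-in scoring function; Model Search's real system trains and evaluates candidate networks (and is considerably more sophisticated), so every name and number here is illustrative:

```python
import random

# Toy sketch of architecture search: sample candidate configurations from a
# search space, score each one, keep the best. The proxy_score below stands in
# for "train the candidate and measure validation accuracy" and is made up.
SEARCH_SPACE = {
    "num_layers": [2, 4, 8],
    "hidden_units": [32, 64, 128, 256],
    "activation": ["relu", "gelu", "swish"],
}

def sample_config(rng):
    """Draw one random candidate from the search space."""
    return {key: rng.choice(options) for key, options in SEARCH_SPACE.items()}

def proxy_score(config):
    """Illustrative stand-in for validation accuracy: prefer mid-sized models."""
    size_penalty = abs(config["num_layers"] * config["hidden_units"] - 512)
    return 1.0 / (1.0 + size_penalty / 512)

def search(trials=50, seed=0):
    rng = random.Random(seed)
    best_config, best_score = None, float("-inf")
    for _ in range(trials):
        config = sample_config(rng)
        score = proxy_score(config)
        if score > best_score:
            best_config, best_score = config, score
    return best_config, best_score

best, score = search()
print(best, round(score, 3))
```

The "domain-agnostic" part of the announcement maps onto the separation above: the search loop never looks inside `proxy_score`, so swapping in a different dataset or task only changes the evaluation function, not the search.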
Hackers are finding ways to hide inside Apple’s walled garden
The iPhone’s locked-down approach to security is spreading, but advanced hackers have found that higher barriers are great for avoiding capture.
You’ve heard of Apple’s famous walled garden, the tightly controlled tech ecosystem that gives the company unique control of features and security. All apps go through a strict Apple approval process, they are confined so sensitive information isn’t gathered on the phone, and developers are locked out of places they’d be able to get into in other systems. The barriers are so high now that it’s probably more accurate to think of it as a castle wall.
Virtually every expert agrees that the locked-down nature of iOS has solved some fundamental security problems, and that with these restrictions in place, the iPhone succeeds spectacularly in keeping almost all the usual bad guys out. But when the most advanced hackers do succeed in breaking in, something strange happens: Apple’s extraordinary defenses end up protecting the attackers themselves. Read More
Is Google’s AI research about to implode?
What do Timnit Gebru’s firing and the recent papers coming out of Google tell us about the state of research at the world’s biggest AI research department?
The high point for Google’s research into Artificial Intelligence may well turn out to be the 19th of October 2017. This was the date that David Silver and his co-workers at DeepMind published a report, in the journal Nature, showing how their deep-learning algorithm AlphaGo Zero was a better Go player than not only the best human in the world, but all other Go-playing computers.
What was most remarkable about AlphaGo Zero was that it worked without human assistance. … But there was a problem. Maybe it wasn’t Silver and his colleagues’ problem, but it was a problem all the same. The DeepMind research program had shown what deep neural networks could do, but it had also revealed what they couldn’t do. Read More
Adversarial Threats to DeepFake Detection: A Practical Perspective
Facially manipulated images and videos, or DeepFakes, can be used maliciously to fuel misinformation or defame individuals. Therefore, detecting DeepFakes is crucial to increase the credibility of social media platforms and other media sharing websites. State-of-the-art DeepFake detection techniques rely on neural network based classification models which are known to be vulnerable to adversarial examples. In this work, we study the vulnerabilities of state-of-the-art DeepFake detection methods from a practical standpoint. We perform adversarial attacks on DeepFake detectors in a black box setting where the adversary does not have complete knowledge of the classification models. We study the extent to which adversarial perturbations transfer across different models and propose techniques to improve the transferability of adversarial examples. We also create more accessible attacks using Universal Adversarial Perturbations, which pose a very feasible attack scenario since they can be easily shared amongst attackers. We perform our evaluations on the winning entries of the DeepFake Detection Challenge (DFDC) and demonstrate that they can be easily bypassed in a practical attack scenario by designing transferable and accessible adversarial attacks. Read More
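The transferability result in the abstract, attacks crafted against one model also fooling another, can be illustrated on the smallest possible example: two linear "detectors" with similar weights, and an FGSM-style perturbation computed against only the first. This is a toy stand-in, not the paper's attack, and the perturbation size is exaggerated for the synthetic scale:

```python
import numpy as np

# Toy sketch of adversarial transferability. Two linear classifiers with
# similar weights stand in for DeepFake detectors: the attacker can query only
# w_source, but the perturbation crafted against it also flips w_target.
# Entirely synthetic and illustrative.
rng = np.random.default_rng(42)

dim = 64
w_source = rng.normal(0, 1, dim)               # surrogate model the attacker holds
w_target = w_source + rng.normal(0, 0.1, dim)  # unseen "black-box" model

# An input both models classify as positive ("fake").
x = w_source / np.linalg.norm(w_source)

# FGSM-style step: move each coordinate against the sign of the surrogate's
# gradient (for a linear model, the gradient is just the weight vector).
eps = 0.3
x_adv = x - eps * np.sign(w_source)

source_flipped = (w_source @ x_adv) < 0
target_flipped = (w_target @ x_adv) < 0
print("source flips:", source_flipped)
print("target flips (transfer):", target_flipped)
```

The transfer happens because the two weight vectors mostly agree sign-by-sign, so a step that decreases one model's score decreases the other's too; the paper's contribution is making this effect work across real, dissimilar detector architectures.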