MUM: A new AI milestone for understanding information

…People issue eight queries on average for complex tasks. Today’s search engines aren’t quite sophisticated enough to answer the way an expert would. But with a new technology called Multitask Unified Model, or MUM, we’re getting closer to helping you with these types of complex needs. So in the future, you’ll need fewer searches to get things done.

MUM has the potential to transform how Google helps you with complex tasks. Like BERT, MUM is built on a Transformer architecture, but it’s 1,000 times more powerful. MUM not only understands language, but also generates it. It’s trained across 75 different languages and many different tasks at once, allowing it to develop a more comprehensive understanding of information and world knowledge than previous models. And MUM is multimodal, so it understands information across text and images and, in the future, can expand to more modalities like video and audio. Read More
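Google has not published MUM’s implementation, but the multimodal idea described above (text and images flowing through one shared Transformer) can be illustrated generically. The sketch below is a conceptual toy in PyTorch, not MUM: text tokens and image patches are each embedded, concatenated into a single sequence, and processed by a shared Transformer encoder so attention can mix the two modalities.

```python
# Generic illustration of multimodal Transformer input fusion (a toy, not Google's MUM):
# text tokens and image patches are embedded into a shared space, concatenated, and
# encoded together so that attention can attend across both modalities.
import torch
import torch.nn as nn

class ToyMultimodalEncoder(nn.Module):
    def __init__(self, vocab_size=1000, d_model=128, patch_dim=3 * 16 * 16):
        super().__init__()
        self.text_embed = nn.Embedding(vocab_size, d_model)
        self.patch_embed = nn.Linear(patch_dim, d_model)   # linear projection of flattened patches
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, token_ids, image_patches):
        text = self.text_embed(token_ids)                  # (batch, text_len, d_model)
        image = self.patch_embed(image_patches)            # (batch, num_patches, d_model)
        fused = torch.cat([text, image], dim=1)            # one joint sequence
        return self.encoder(fused)                         # contextualized text+image features

model = ToyMultimodalEncoder()
tokens = torch.randint(0, 1000, (2, 12))                   # a short text query, batch of 2
patches = torch.randn(2, 49, 3 * 16 * 16)                  # 7x7 grid of flattened 16x16 patches
features = model(tokens, patches)
print(features.shape)                                      # torch.Size([2, 61, 128])
```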

#big7, #nlp

Watson Orchestrate

Watson Orchestrate brings interactive AI into the tools you already use, like email and Slack, to increase your productivity. This isn’t a static bot programmed by IT. You initiate work in natural language, and Watson Orchestrate uses a powerful AI engine to combine pre-packaged skills on the fly and in context, based on organizational knowledge and your prior interactions. Read More

See Demo

#big7, #robotics

Facebook details self-supervised AI that can segment images and videos

Facebook today announced DINO, an algorithm developed in collaboration with Inria that enables transformers, a type of machine learning model, to be trained without labeled data. The company claims DINO sets a new state of the art among methods trained on unlabeled data and yields a model that can discover and segment objects in an image or video without being given a specific objective.

Object segmentation is used in tasks ranging from swapping out the background of a video chat to teaching robots to navigate a factory floor. But it’s considered among the hardest challenges in computer vision because it requires an AI to understand what’s in an image. Read More
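DINO’s core idea is self-distillation with no labels: a student network is trained to match the output of a momentum-updated teacher across different augmented views of the same image, with centering applied to the teacher outputs to avoid collapse. The sketch below illustrates that training loop in PyTorch at a conceptual level only; the tiny MLP backbone, noise-based “augmentations,” and hyperparameters are placeholders rather than the paper’s actual configuration.

```python
# Minimal sketch of DINO-style self-distillation without labels (not the official code).
# Assumptions: a tiny MLP stands in for a Vision Transformer backbone, and the two
# "views" are just noisy copies of the batch instead of DINO's multi-crop augmentation.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class Head(nn.Module):
    """Backbone + projection head producing logits over K prototype dimensions."""
    def __init__(self, in_dim=3 * 32 * 32, out_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(), nn.Linear(in_dim, 512), nn.GELU(), nn.Linear(512, out_dim)
        )

    def forward(self, x):
        return self.net(x)

student = Head()
teacher = copy.deepcopy(student)          # teacher starts as a copy of the student
for p in teacher.parameters():
    p.requires_grad_(False)               # teacher is never updated by gradients

optimizer = torch.optim.AdamW(student.parameters(), lr=1e-4)
center = torch.zeros(1, 256)              # running center of teacher outputs (anti-collapse)
student_temp, teacher_temp, momentum = 0.1, 0.04, 0.996

def two_views(images):
    """Two randomly perturbed views of the same batch (placeholder augmentation)."""
    noise = lambda x: x + 0.1 * torch.randn_like(x)
    return noise(images), noise(images)

for step in range(100):                   # toy loop over random data
    images = torch.randn(16, 3, 32, 32)
    v1, v2 = two_views(images)

    with torch.no_grad():
        t1 = F.softmax((teacher(v1) - center) / teacher_temp, dim=-1)
        t2 = F.softmax((teacher(v2) - center) / teacher_temp, dim=-1)

    s1 = F.log_softmax(student(v1) / student_temp, dim=-1)
    s2 = F.log_softmax(student(v2) / student_temp, dim=-1)

    # Cross-view cross-entropy: the student on one view matches the teacher on the other.
    loss = -(t1 * s2).sum(-1).mean() / 2 - (t2 * s1).sum(-1).mean() / 2

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    with torch.no_grad():
        # Momentum (EMA) update of the teacher and of the output center.
        for ps, pt in zip(student.parameters(), teacher.parameters()):
            pt.mul_(momentum).add_(ps, alpha=1 - momentum)
        center = 0.9 * center + 0.1 * torch.cat([teacher(v1), teacher(v2)]).mean(0, keepdim=True)
```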

#big7, #frameworks, #self-supervised

Microsoft details the latest developments in machine learning at GTC 21

With the rapid pace of change in AI and machine learning, it’s no surprise that Microsoft had its usual strong presence at this year’s Nvidia GTC event.

Representatives of the company shared their latest machine learning work across multiple sessions, covering inferencing at scale, a new capability for training machine learning models across hybrid environments, and the debut of the PyTorch Profiler, which will help data scientists be more efficient when analyzing and troubleshooting ML performance issues.

In all three cases, Microsoft has paired its own technologies, such as Azure, with open-source tools and Nvidia’s GPU hardware and software to deliver these new capabilities. Read More
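The PyTorch Profiler mentioned above ships with recent PyTorch releases as the open-source `torch.profiler` API. Below is a minimal usage sketch for instrumenting a few training steps; the model, data, and the `./log` TensorBoard output directory are placeholders for illustration.

```python
# Minimal sketch of the torch.profiler API (PyTorch >= 1.8.1); the model, data, and
# the ./log output directory are illustrative placeholders.
import torch
import torch.nn as nn
from torch.profiler import profile, schedule, tensorboard_trace_handler, ProfilerActivity

model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

with profile(
    activities=[ProfilerActivity.CPU],            # add ProfilerActivity.CUDA on a GPU machine
    schedule=schedule(wait=1, warmup=1, active=3, repeat=1),
    on_trace_ready=tensorboard_trace_handler("./log"),
    record_shapes=True,
    profile_memory=True,
) as prof:
    for step in range(6):                         # a few toy training steps
        inputs = torch.randn(32, 512)
        targets = torch.randint(0, 10, (32,))
        loss = loss_fn(model(inputs), targets)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        prof.step()                               # tell the profiler a step has finished

# Summarize the busiest operators on the console; the TensorBoard plugin reads ./log.
print(prof.key_averages().table(sort_by="cpu_time_total", row_limit=10))
```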

#big7, #nvidia, #frameworks

The Limits of Political Debate

I.B.M. taught a machine to debate policy questions. What can it teach us about the limits of rhetorical persuasion?

We need A.I. to be more like a machine, supplying troves of usefully organized information. It can leave the bullshitting to us.

In February, 2011, an Israeli computer scientist named Noam Slonim proposed building a machine that would be better than people at something that seems inextricably human: arguing about politics. …In February, 2019, the machine had its first major public debate, hosted by Intelligence Squared, in San Francisco. The opponent was Harish Natarajan, a thirty-one-year-old British economic consultant, who, a few years earlier, had been the runner-up in the World Universities Debating Championship. The machine lost.

As Arthur Applbaum, a political philosopher who is the Adams Professor of Political Leadership and Democratic Values at Harvard’s Kennedy School, saw it, the particular adversarial format chosen for this debate had the effect of elevating technical questions and obscuring ethical ones. The audience had voted Natarajan the winner of the debate. But, Applbaum asked, what had his argument consisted of? “He rolled out standard objections: it’s not going to work in practice, and it will be wasteful, and there will be unintended consequences. If you go through Harish’s argument line by line, there’s almost no there there,” he said. Natarajan’s way of defeating the computer, at some level, had been to take a policy question and strip it of all its meaningful specifics. “It’s not his fault,” Applbaum said. There was no way that he could match the computer’s fact-finding. “So, instead, he bullshat.” Read More

#big7, #human

Google starts trialing its FLoC cookie alternative in Chrome

Google today announced that it is rolling out Federated Learning of Cohorts (FLoC), a crucial part of its Privacy Sandbox project for Chrome, as a developer origin trial.

FLoC is meant to be an alternative to the kind of cookies that advertising technology companies use today to track you across the web. Instead of a personally identifiable cookie, FLoC runs locally and analyzes your browsing behavior to group you into a cohort of like-minded people with similar interests (and doesn’t share your browsing history with Google). That cohort is specific enough to allow advertisers to do their thing and show you relevant ads, but without being so specific as to allow marketers to identify you personally.

This “interest-based advertising,” as Google likes to call it, allows you to hide within a crowd of users with similar interests. The browser exposes only a cohort ID, while your browsing history and other data stay on your device. Read More
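The FLoC proposal describes computing that cohort locally with a locality-sensitive hash, and the initial Chrome trial reportedly uses a SimHash-style hash over the domains you visit, so people with similar histories fall into the same cohort. The sketch below illustrates the general idea in Python; the 16-bit hash width, the feature hashing, and the direct use of the hash value as a cohort ID are simplifications, not Chrome’s implementation.

```python
# Illustrative SimHash-style cohort assignment over visited domains.
# Not Chrome's implementation: hash width, feature set, and the direct use of the
# hash value as a "cohort ID" are simplifications for illustration.
import hashlib

HASH_BITS = 16  # small width so similar histories visibly share a cohort

def _feature_bits(domain: str) -> int:
    """Deterministically map a domain to a HASH_BITS-wide bit pattern."""
    digest = hashlib.sha256(domain.encode("utf-8")).digest()
    return int.from_bytes(digest[:8], "big") % (1 << HASH_BITS)

def simhash_cohort(visited_domains: list) -> int:
    """SimHash: accumulate +1/-1 votes per bit across features, then threshold each bit."""
    counts = [0] * HASH_BITS
    for domain in set(visited_domains):
        bits = _feature_bits(domain)
        for i in range(HASH_BITS):
            counts[i] += 1 if (bits >> i) & 1 else -1
    cohort = 0
    for i, count in enumerate(counts):
        if count > 0:
            cohort |= 1 << i
    return cohort

# Two users with largely overlapping histories tend to land in nearby (often identical)
# cohorts, while the raw history itself never leaves the device in this scheme.
alice = ["news.example", "cooking.example", "gardening.example"]
bob = ["news.example", "cooking.example", "travel.example"]
print(simhash_cohort(alice), simhash_cohort(bob))
```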

#big7, #privacy

Amazon Web Services partners with Hugging Face to simplify AI-based natural language processing

Amazon Web Services Inc. said today it’s partnering with an artificial intelligence startup called Hugging Face Inc. as part of an effort to simplify and accelerate the adoption of natural language processing models.

… For its part, Hugging Face has announced a couple of new services built on Amazon SageMaker, AWS’s platform for building, training and deploying machine learning models in the cloud and at the edge: AutoNLP, which provides an automated way to train, evaluate and deploy state-of-the-art NLP models for different tasks, and the Accelerated Inference API, a hosted service for running NLP models with low-latency predictions. The startup has also chosen AWS as its preferred cloud provider. Read More
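In practice, the partnership surfaces in the SageMaker Python SDK as a dedicated `HuggingFace` estimator backed by prebuilt Deep Learning Containers. The snippet below is a minimal sketch of launching a training job with it; the script name, instance type, framework version strings, S3 paths, and hyperparameters are illustrative placeholders, so check the current SageMaker and Hugging Face documentation for supported values.

```python
# Minimal sketch of launching a Hugging Face training job on Amazon SageMaker.
# Assumptions: a train.py fine-tuning script exists in ./scripts, an execution role is
# available, and the version strings match a published Deep Learning Container.
import sagemaker
from sagemaker.huggingface import HuggingFace

role = sagemaker.get_execution_role()   # IAM role with SageMaker permissions

estimator = HuggingFace(
    entry_point="train.py",             # your Transformers fine-tuning script (placeholder)
    source_dir="./scripts",
    role=role,
    instance_type="ml.p3.2xlarge",      # single-GPU instance (placeholder choice)
    instance_count=1,
    transformers_version="4.6",         # illustrative versions; use a currently supported pair
    pytorch_version="1.7",
    py_version="py36",
    hyperparameters={
        "model_name_or_path": "distilbert-base-uncased",
        "epochs": 3,
        "train_batch_size": 32,
    },
)

# Start training on data already uploaded to S3 (placeholder bucket/prefixes).
estimator.fit({"train": "s3://my-bucket/train", "test": "s3://my-bucket/test"})
```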

#big7, #nlp

After Neoliberalism

At the heart of the new age are novel configurations of fear, certainty, and power.

Shoshana Zuboff. The Age of Surveillance Capitalism. Public Affairs, 2019.

Today there is no more powerful corporation in the world than Google, so it may be hard to remember that not too long ago, the company was in a fight for its very existence. In its early years, Google couldn’t figure out how to make money. … Google engineers were aware that users’ search queries produced a great deal of “collateral data,” which they collected as a matter of course. Data logs revealed not only common keywords, but also dwell times and click patterns. This “data exhaust,” it began to dawn on some of Google’s executives, could be an immensely valuable resource for the company, since the data contained information that advertisers could use to target consumers. Read More

#big7, #books

Inside Facebook Reality Labs: The Next Era of Human-Computer Interaction

Facebook Reality Labs (FRL) Chief Scientist Michael Abrash has called AR interaction “one of the hardest and most interesting multi-disciplinary problems around,” because it’s a complete paradigm shift in how humans interact with computers. The last great shift began in the 1960s when Doug Engelbart’s team invented the mouse and helped pave the way for the graphical user interfaces (GUIs) that dominate our world today. The invention of the GUI fundamentally changed HCI for the better — and it’s a sea change that’s held for decades.

But all-day wearable AR glasses require a new paradigm because they will be able to function in every situation you encounter in the course of a day. They need to be able to do what you want them to do and tell you what you want to know when you want to know it, in much the same way that your own mind works — seamlessly sharing information and taking action when you want it, and not getting in your way otherwise. Read More

#big7, #human

How Facebook got addicted to spreading misinformation

The company’s AI algorithms gave it an insatiable habit for lies and hate speech. Now the man who built them can’t fix the problem.

It was March 23, 2018, just days after the revelation that Cambridge Analytica, a consultancy that worked on Donald Trump’s 2016 presidential election campaign, had surreptitiously siphoned the personal data of tens of millions of Americans from their Facebook accounts in an attempt to influence how they voted. It was the biggest privacy breach in Facebook’s history. …The Cambridge Analytica scandal would kick off Facebook’s largest publicity crisis ever. Read More

#big7, #ethics, #surveillance