The Race For AI: Which Tech Giants Are Snapping Up Artificial Intelligence Startups

The usual suspects are leading the race for AI: tech giants like Facebook, Amazon, Microsoft, Google, and Apple (FAMGA) have all been aggressively acquiring AI startups for the last decade.

Among FAMGA, Apple leads the way. With 29 AI acquisitions since 2010, Apple has made nearly twice as many as second-place Google (the frontrunner from 2012 to 2016), which has 15.

Apple and Google are followed by Microsoft with 13 acquisitions, Facebook with 12, and Amazon with 7. Read More

#big7

Google AI Introduces ‘WIT’, A Wikipedia-Based Image Text Dataset For Multimodal Multilingual Machine Learning

Image and text datasets are widely used in many machine learning applications. To model the relationship between images and text, most multimodal visio-linguistic models today rely on large datasets. Historically, these datasets were created either by manually captioning images or by crawling the web and extracting the alt-text as the caption. While the former method produces higher-quality data, the intensive manual annotation process limits the amount of data produced. The automated extraction method can yield larger datasets, but it requires either heuristics and careful filtering to ensure data quality, or scaled-up models to achieve robust performance.

To overcome these limitations, the Google research team created a high-quality, large, multilingual dataset called the Wikipedia-Based Image Text (WIT) Dataset. It was created by extracting multiple text selections associated with an image from Wikipedia articles and Wikimedia image links. Read More
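To make the "crawl the web and extract the alt-text" approach mentioned above concrete, here is a minimal sketch using only Python's standard library. This is illustrative only and is not the actual WIT pipeline, which extracts multiple text fields (captions, surrounding text, page context) from Wikipedia and applies far more extensive filtering; the three-word quality heuristic is an assumption for the example.

```python
# Pair each <img> URL in an HTML page with its alt-text, keeping only
# alt-text that passes a simple quality filter. Illustrative sketch, not
# the WIT extraction pipeline.
from html.parser import HTMLParser

class AltTextExtractor(HTMLParser):
    """Collect (image URL, alt-text) pairs from <img> tags."""
    def __init__(self):
        super().__init__()
        self.pairs = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            a = dict(attrs)
            src = a.get("src")
            alt = (a.get("alt") or "").strip()
            # Toy quality heuristic: keep only multi-word alt-text.
            if src and len(alt.split()) >= 3:
                self.pairs.append((src, alt))

html = (
    '<img src="cat.jpg" alt="A cat sleeping on a sofa">'
    '<img src="spacer.gif" alt="">'  # filtered out: empty alt-text
)
extractor = AltTextExtractor()
extractor.feed(html)
print(extractor.pairs)  # [('cat.jpg', 'A cat sleeping on a sofa')]
```

The empty-alt image is dropped by the filter, which is exactly the kind of heuristic cleanup the paragraph above says automated extraction requires.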

#big7, #image-recognition

Google launches ‘digital twin’ tool for logistics and manufacturing

Google today announced Supply Chain Twin, a new Google Cloud solution that lets companies build a digital twin — a representation of their physical supply chain — by organizing data to get a more complete view of suppliers, inventories, and events like weather. Arriving alongside Supply Chain Twin is the Supply Chain Pulse module, which can be used with Supply Chain Twin to provide dashboards, analytics, alerts, and collaboration in Google Workspace. Read More

#big7, #iot

Facebook Develops New Machine Learning Chip

Google, Amazon and Microsoft have all been hiring and spending millions of dollars to design their own computer chips from scratch, with the goal of squeezing financial savings and better performance from servers that handle and train the companies’ machine-learning models. Facebook has joined the party too, and is developing a chip that powers machine learning for tasks such as recommending content to users, according to two people familiar with the project.

Another in-house chip designed by Facebook aims to improve the quality of watching recorded and livestreamed videos for users of its apps through a process known as video transcoding, one of the people said. If successful, the efforts to develop cheaper but more powerful semiconductors could help the company reduce the carbon footprint of its ever-growing data centers in coming years while also potentially decreasing its reliance on existing chip vendors, which recently included Intel, Qualcomm and Broadcom. Read More

#big7, #nvidia

Google is designing its own Arm-based processors for 2023 Chromebooks – report

Google is reportedly designing its own Arm-based system-on-chips for Chromebook laptops and tablets to be launched in 2023.

The internet search giant appears to be following the same path as Apple by developing its own line of processors for client devices, according to Nikkei Asia. Read More

#big7, #nvidia

Not All Memories are Created Equal: Learning to Forget by Expiring

Attention mechanisms have shown promising results in sequence modeling tasks that require long-term memory. However, not all content in the past is equally important to remember. We propose Expire-Span, a method that learns to retain the most important information and expire the irrelevant information. This forgetting of memories enables Transformers to scale to attend over tens of thousands of previous timesteps efficiently, as not all states from previous timesteps are preserved. We demonstrate that Expire-Span can help models identify and retain critical information, and show it can achieve strong performance on reinforcement learning tasks specifically designed to challenge this functionality. Next, we show that Expire-Span can scale to memories that are tens of thousands in size, setting a new state of the art on extremely long-context tasks such as character-level language modeling and a frame-by-frame moving-objects task. Finally, we analyze the efficiency of Expire-Span compared to existing approaches and demonstrate that it trains faster and uses less memory. Read More
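The expiration mechanism described in the abstract can be sketched in a few lines. This is a heavily simplified toy, not Facebook AI's model: in Expire-Span the span is predicted by the network and trained end-to-end, whereas here the spans are hard-coded just to show how expired states get dropped.

```python
# Toy sketch of expiring memory: each stored state carries a "span" --
# how many timesteps it should survive. States whose span has elapsed
# are dropped, so the retained memory stays small over long sequences.

def expire_step(memory, current_time):
    """Drop memories whose span has elapsed.

    memory: list of (insert_time, span, state) tuples.
    """
    return [(t, span, s) for (t, span, s) in memory
            if current_time - t <= span]

# Hand-picked spans for illustration; the real model learns these.
memory = [
    (0, 2, "greeting"),       # short-lived
    (0, 100, "user's name"),  # important: retained for a long time
    (1, 1, "filler word"),    # short-lived
]
memory = expire_step(memory, current_time=5)
print([s for (_, _, s) in memory])  # ["user's name"]
```

Because only unexpired states remain in `memory`, later attention steps operate over a much smaller set, which is the source of the speed and memory savings the abstract reports.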

#big7, #nlp

Facebook AI Open-Sources ‘Droidlet’, A Platform For Building Robots With Natural Language Processing And Computer Vision To Understand The World Around Them

Robots today can be programmed to vacuum the floor or perform a preset dance, but there is still much work to be done before they can achieve their full potential. The main obstacle is that robots cannot recognize what is in their environment at a deep level, and therefore cannot function properly without being told all of these details by humans. For instance, a robot may be programmed to back up after bumping into an object so that the collision is not repeated, but that behavior isn't based on understanding anything about chairs, because the robot doesn't know what a chair is!

The Facebook AI team has just released Droidlet, a new platform that makes it easier for anyone to build their own smart robot. It's an open-source project explicitly designed with hobbyists and researchers in mind, so you can quickly prototype your AI algorithms without having to spend countless hours coding everything from scratch. Read More

#big7, #robotics

New toolkit aims to help teams create responsible human-AI experiences

Microsoft has released the Human-AI eXperience (HAX) Toolkit, a set of practical tools to help teams strategically create and responsibly implement best practices when creating artificial intelligence technologies that interact with people.

The toolkit comes as AI-infused products and services, such as virtual assistants, route planners, autocomplete, recommendations and reminders, are becoming increasingly popular and useful for many people. But these applications have the potential to do things that aren’t helpful, like misunderstand a voice command or misinterpret an image. In some cases, AI systems can demonstrate disruptive behaviors or even cause harm. Read More

#big7, #devops, #human

Google’s Supermodel: DeepMind Perceiver is a step on the road to an AI machine that could process anything and everything

The Perceiver is a kind of way-station on the road to what Google AI lead Jeff Dean has described as one model that could handle any task and "learn" faster, with less data.

Arguably one of the premier events that has brought AI to popular attention in recent years was the invention of the Transformer by Ashish Vaswani and colleagues at Google in 2017. The Transformer led to a wave of language programs such as Google's BERT and OpenAI's GPT-3 that have been able to produce surprisingly human-seeming sentences, giving the impression machines can write like a person.

Now, scientists at DeepMind in the U.K., which is owned by Google, want to take the benefits of the Transformer beyond text, to let it revolutionize other material including images, sounds and video, and spatial data of the kind a car records with LiDAR. 

The Perceiver, unveiled this week by DeepMind in a paper posted on arXiv, adapts the Transformer with some tweaks to let it consume all those types of input, and to perform on the various tasks, such as image recognition, for which separate kinds of neural networks are usually developed. Read More
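A simplified sketch of the central trick that lets the Perceiver consume images, audio, or LiDAR alike: a small, fixed-size array of learned "latents" cross-attends to the raw input, so the cost of each attention layer scales with the latent size rather than the (possibly huge) input size. The sizes and single-layer setup below are assumptions for illustration, not the paper's exact architecture.

```python
# One cross-attention step: latents act as queries; the raw input
# provides keys/values. Output shape depends only on the latent array.
import numpy as np

def cross_attention(latents, inputs):
    """latents: (n_latents, d); inputs: (n_inputs, d)."""
    d = latents.shape[-1]
    scores = latents @ inputs.T / np.sqrt(d)          # (n_latents, n_inputs)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax over inputs
    return weights @ inputs                           # (n_latents, d)

rng = np.random.default_rng(0)
latents = rng.normal(size=(64, 32))       # small, fixed latent array
pixels = rng.normal(size=(50_000, 32))    # huge raw input, e.g. image bytes
out = cross_attention(latents, pixels)
print(out.shape)  # (64, 32): independent of the 50,000 input elements
```

Whatever the modality's raw size, attention is funneled through the same fixed latent array, which is why one architecture can serve tasks that previously needed separate kinds of neural networks.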

#big7, #nlp, #image-recognition

Facebook is ditching plans to make an interface that reads the brain

The spring of 2017 may be remembered as the coming-out party for Big Tech’s campaign to get inside your head. That was when news broke of Elon Musk’s new brain-interface company, Neuralink, which is working on how to stitch thousands of electrodes into people’s brains. Days later, Facebook joined the quest when it announced that its secretive skunkworks, named Building 8, was attempting to build a headset or headband that would allow people to send text messages by thinking—tapping them out at 100 words per minute.

The company’s goal was a hands-free interface anyone could use in virtual reality. “What if you could type directly from your brain?” asked Regina Dugan, a former DARPA officer who was then head of the Building 8 hardware division. “It sounds impossible, but it’s closer than you realize.”

Now the answer is in—and it’s not close at all. Four years after announcing a “crazy amazing” project to build a “silent speech” interface using optical technology to read thoughts, Facebook is shelving the project, saying consumer brain-reading still remains very far off. Read More

#big7, #human