Facebook battles the challenges of tactile sensing

Facebook this morning announced ReSkin, an open source touch-sensing synthetic “skin” created by researchers at the company in collaboration with Carnegie Mellon University. Leveraging machine learning and magnetic sensing, ReSkin is designed to offer an inexpensive, versatile, durable, and replaceable solution for long-term use, employing a self-supervised learning algorithm to help auto-calibrate the sensor. Read More
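At its core, the learning problem here is a regression from magnetometer readings to contact properties: as the elastomer skin deforms, embedded magnetic particles shift and the flux at each sensor changes. Below is a minimal sketch of that mapping, with an illustrative (not official) sensor count and a generic network standing in for ReSkin’s actual model.

```python
# A hedged sketch, not Facebook's released code: a generic regressor from
# magnetometer flux readings to contact position and force. The sensor
# count and network size are illustrative assumptions.
import torch
import torch.nn as nn

N_SENSORS = 5                        # assumed: each magnetometer reports 3-axis flux

reskin_head = nn.Sequential(
    nn.Linear(N_SENSORS * 3, 64),    # flattened (x, y, z) flux per sensor
    nn.ReLU(),
    nn.Linear(64, 64),
    nn.ReLU(),
    nn.Linear(64, 3),                # predicted (contact_x, contact_y, force)
)

flux = torch.randn(1, N_SENSORS * 3)             # one snapshot of skin readings
contact_x, contact_y, force = reskin_head(flux).squeeze(0)
```

The self-supervised calibration the announcement mentions matters because every newly fabricated skin has slightly different magnetic characteristics, so the learned mapping must be re-aligned per unit without fresh labeled data.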

#big7, #robotics

Missing the Point

When AI manipulates free speech, censorship is not the solution. Better code is.

Every issue is easy — if you just ignore the facts. And Glenn Greenwald has now given us a beautiful example of this eternal, and increasingly vital, truth.

In his Substack, Glenn attacks the Facebook whistleblower (he doesn’t call her that; he calls her a quote-whistleblower-unquote), Frances Haugen, for being an unwitting dupe of the Vast Leftwing Conspiracy that is now focused so intently on censoring free speech. To criticize what Facebook has done, in Glenn’s simple world, is to endorse the repeal of the First Amendment. To regulate Facebook is to start us down the road, if not to serfdom, then certainly to a Substack-less world.

But all this looks so simple to Glenn, because he’s so good at ignoring how technology matters — to everything, and especially to modern media. Glenn doesn’t do technology. Read More

#bias, #big7

DeepMind and Alphabet: who needs markets?

DeepMind, the artificial intelligence company founded in 2010 by Demis Hassabis, Shane Legg, and Mustafa Suleyman, and acquired by Google in 2014 for $650 million, has published its financial results, revealing what might be politely called a “creative accounting” issue.

In principle, it all sounds very promising: after years of losses, DeepMind is now apparently profitable, with revenues of $1.13 billion in 2020, three times 2019’s $361 million, against relatively restrained expenses that rose from $976 million in 2019 to $1.06 billion in 2020. Seen in this light, the picture is one of a cutting-edge company that, after years of heavy investment and significant losses, achieves profitability thanks to strong revenue growth and relative containment of its expenses. At last, Alphabet can count DeepMind among the companies under its umbrella that generate revenue. From red to black in just a few years. When all is said and done, it is fairly common for pioneering companies like this one to spend long periods investing and incurring heavy losses. Read More
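The arithmetic behind the from-red-to-black framing, using the figures quoted above (with a note on where the headline’s skepticism comes from):

```python
# Figures quoted above, in USD billions.
revenue = {2019: 0.361, 2020: 1.13}
expenses = {2019: 0.976, 2020: 1.06}

growth = revenue[2020] / revenue[2019]        # ~3.1x: "three times 2019's"
surplus = revenue[2020] - expenses[2020]      # ~$0.07B implied 2020 surplus
print(f"revenue growth {growth:.1f}x; implied surplus ${surplus * 1000:.0f}M")

# The "who needs markets?" catch: per DeepMind's UK filings, that revenue
# is billed almost entirely to other Alphabet group companies rather than
# to external customers, hence the "creative accounting" framing.
```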

#big7, #investing

Big Tech & Their Favourite Deep Learning Techniques

Every week, the top AI labs globally — Google, Facebook, Microsoft, Apple, etc. — release tons of new research work, tools, datasets, models, libraries and frameworks in artificial intelligence (AI) and machine learning (ML). 

Interestingly, they all seem to have picked a particular school of thought in deep learning, and over time the pattern has become increasingly clear. Read More

#big7

The Race For AI: Which Tech Giants Are Snapping Up Artificial Intelligence Startups

The usual suspects are leading the race for AI: tech giants like Facebook, Amazon, Microsoft, Google, and Apple (FAMGA) have all been aggressively acquiring AI startups for the last decade.

Among FAMGA, Apple leads the way. With 29 AI acquisitions since 2010, Apple has made nearly twice as many as second-place Google (the frontrunner from 2012 to 2016), which has 15.

Apple and Google are followed by Microsoft with 13 acquisitions, Facebook with 12, and Amazon with 7. Read More
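As a quick tally of the counts above (the “nearly twice as many” claim for Apple over Google checks out):

```python
# AI acquisition counts since 2010, as reported above.
famga = {"Apple": 29, "Google": 15, "Microsoft": 13, "Facebook": 12, "Amazon": 7}

leader, runner_up = sorted(famga, key=famga.get, reverse=True)[:2]
print(f"{leader} vs {runner_up}: {famga[leader] / famga[runner_up]:.2f}x")  # ~1.93x
print("FAMGA total:", sum(famga.values()))                                  # 76
```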

#big7

Google AI Introduces ‘WIT’, A Wikipedia-Based Image Text Dataset For Multimodal Multilingual Machine Learning

Image and text datasets are widely used in many machine learning applications. To model the relationship between images and text, most multimodal visio-linguistic models today rely on large datasets. Historically, these datasets were created either by manually captioning images or by crawling the web and extracting the alt-text as the caption. While the former method produces higher-quality data, the intensive manual annotation limits the amount of data produced. The automated extraction method can yield larger datasets, but it requires either heuristics and careful filtering to ensure data quality, or scaling up models to achieve robust performance.

To overcome these limitations, the Google Research team created a high-quality, large, multilingual dataset called the Wikipedia-Based Image Text (WIT) Dataset, built by extracting multiple text selections associated with each image from Wikipedia articles and Wikimedia image links. Read More
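To make the extraction idea concrete, here is a toy sketch of pulling image-text pairs from a single Wikipedia page. This is not Google’s pipeline (per the paper, WIT covers over 100 languages and applies extensive quality filtering), but it illustrates the alt-text-versus-caption distinction drawn above.

```python
# Toy illustration only: pair each figure on a Wikipedia page with its
# alt text (what generic web crawls harvest) and its visible caption
# (the richer, Wikipedia-specific context WIT also exploits).
import requests
from bs4 import BeautifulSoup

def extract_image_text_pairs(url: str) -> list[dict]:
    html = requests.get(url, headers={"User-Agent": "wit-sketch/0.1"}).text
    soup = BeautifulSoup(html, "html.parser")
    pairs = []
    for fig in soup.find_all("figure"):
        img = fig.find("img")
        cap = fig.find("figcaption")
        if img is None:
            continue
        pairs.append({
            "src": img.get("src", ""),
            "alt_text": img.get("alt", ""),
            "caption": cap.get_text(strip=True) if cap else "",
        })
    return pairs

if __name__ == "__main__":
    for pair in extract_image_text_pairs("https://en.wikipedia.org/wiki/Half_Dome")[:5]:
        print(pair)
```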

#big7, #image-recognition

Google launches ‘digital twin’ tool for logistics and manufacturing

Google today announced Supply Chain Twin, a new Google Cloud solution that lets companies build a digital twin — a representation of their physical supply chain — by organizing data to get a more complete view of suppliers, inventories, and events like weather. Arriving alongside Supply Chain Twin is the Supply Chain Pulse module, which can be used with Supply Chain Twin to provide dashboards, analytics, alerts, and collaboration in Google Workspace. Read More
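Google has not published Supply Chain Twin’s data model, so the following is only a rough sketch of the concept: a continuously refreshed join of internal and external feeds, over which a module like Pulse can raise alerts. All names below are hypothetical.

```python
# Hypothetical sketch of a "digital twin" as a joined view over supplier,
# inventory, and external-event feeds; not Google's actual schema.
from dataclasses import dataclass, field

@dataclass
class InventoryRecord:
    sku: str
    location: str
    quantity: int

@dataclass
class WeatherEvent:
    region: str
    severity: str                       # e.g. "storm warning"

@dataclass
class SupplyChainTwin:
    suppliers: dict[str, str] = field(default_factory=dict)    # supplier -> region
    inventory: list[InventoryRecord] = field(default_factory=list)
    events: list[WeatherEvent] = field(default_factory=list)

    def at_risk_inventory(self) -> list[InventoryRecord]:
        """Inventory held where a weather event is active, i.e. the kind
        of alert a Supply Chain Pulse dashboard might surface."""
        hot_regions = {e.region for e in self.events}
        return [r for r in self.inventory if r.location in hot_regions]
```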

#big7, #iot

Facebook Develops New Machine Learning Chip

Google, Amazon and Microsoft have all been hiring and spending millions of dollars to design their own computer chips from scratch, with the goal of squeezing financial savings and better performance from servers that handle and train the companies’ machine-learning models. Facebook has joined the party too, and is developing a chip that powers machine learning for tasks such as recommending content to users, according to two people familiar with the project.

Another in-house chip designed by Facebook aims to improve the quality of watching recorded and livestreamed videos for users of its apps through a process known as video transcoding, one of the people said. If successful, the efforts to develop cheaper but more powerful semiconductors could help the company reduce the carbon footprint of its ever-growing data centers in coming years while also potentially decreasing its reliance on existing chip vendors, which recently included Intel, Qualcomm and Broadcom. Read More

#big7, #nvidia

Google is designing its own Arm-based processors for 2023 Chromebooks – report

Google is reportedly designing its own Arm-based system-on-chips for Chromebook laptops and tablets to be launched in 2023.

The internet search giant appears to be following the same path as Apple by developing its own line of processors for client devices, according to Nikkei Asia. Read More

#big7, #nvidia

Not All Memories are Created Equal: Learning to Forget by Expiring

Attention mechanisms have shown promising results in sequence modeling tasks that require long-term memory. However, not all content in the past is equally important to remember. We propose Expire-Span, a method that learns to retain the most important information and expire the irrelevant information. This forgetting of memories enables Transformers to scale to attend over tens of thousands of previous timesteps efficiently, as not all states from previous timesteps are preserved. We demonstrate that Expire-Span can help models identify and retain critical information and show it can achieve strong performance on reinforcement learning tasks specifically designed to challenge this functionality. Next, we show that Expire-Span can scale to memories that are tens of thousands in size, setting a new state of the art on incredibly long context tasks such as character-level language modeling and a frame-by-frame moving objects task. Finally, we analyze the efficiency of Expire-Span compared to existing approaches and demonstrate that it trains faster and uses less memory. Read More
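As a companion to the abstract, here is a minimal sketch of the masking mechanism it describes, simplified from the paper (Sukhbaatar et al., 2021); the class name, shapes, and defaults are assumptions, not the authors’ released code. Each stored memory predicts a scalar lifespan, and a soft, differentiable mask fades its attention weight to zero once that span has elapsed.

```python
# A hedged sketch of Expire-Span masking: each memory i predicts a
# lifespan e_i, and attention to it is softly masked out once the
# elapsed time (t - i) exceeds e_i.
import torch
import torch.nn as nn

class ExpireSpanMask(nn.Module):
    def __init__(self, d_model: int, max_span: int = 1024, ramp: int = 128):
        super().__init__()
        self.span_predictor = nn.Linear(d_model, 1)   # scalar lifespan per memory
        self.max_span, self.ramp = max_span, ramp

    def forward(self, memories: torch.Tensor, attn: torch.Tensor) -> torch.Tensor:
        """memories: (B, M, d) states stored at past steps 0..M-1;
        attn: (B, T, M) raw attention of T current queries over them."""
        M, T = memories.size(1), attn.size(1)
        # e_i = max_span * sigmoid(w . h_i + b): how long memory i should live
        e = self.max_span * torch.sigmoid(self.span_predictor(memories)).squeeze(-1)
        # elapsed time (t - i) between query step t and memory step i
        t = torch.arange(M, M + T, device=attn.device).view(1, T, 1).float()
        i = torch.arange(M, device=attn.device).view(1, 1, M).float()
        remaining = e.unsqueeze(1) - (t - i)                       # (B, T, M)
        # mask is 1 while a memory is alive, then ramps linearly to 0 over
        # `ramp` steps after expiry, which keeps span learning differentiable
        mask = torch.clamp(1 + remaining / self.ramp, 0.0, 1.0)
        masked = attn * mask
        return masked / masked.sum(dim=-1, keepdim=True).clamp(min=1e-8)
```

Memories whose mask has reached zero for all future queries can be dropped from the cache entirely, which is what lets the model attend over tens of thousands of timesteps at tractable cost.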

#big7, #nlp