The Interface Phase: exploring new interfaces and spaces web3 might bring 

… While web3 apps like NFTs, DAOs, and DeFi are gaining popularity, we’re still interacting with web3 through web2 interfaces. If history is a guide, we’re going to need new web3-native interfaces and spaces to bring hundreds of millions of people into the ecosystem. 

It’s early, but there are clues of what new interfaces might look like. New digital spaces are springing up around the internet and evolving every day. Wallets like MetaMask, Phantom, and Rainbow are changing how we log in, and what we bring with us when we do. Web3 worlds like Decentraland, Somnium Space, The Sandbox, and Cyber are reimagining how we experience and interact online. They’re early components of the Metaverse, sure, but this isn’t necessarily a Metaverse piece. It’s a piece about the evolution of the way that we interact with, and on, the internet.

We’ve spent most of 2021 over here at Not Boring HQ exploring web3, and hopefully we all understand what’s happening a little better because of it, but it’s hard to feel the potential impact of web3 in our bones while we’re still using web2 interfaces. For web3 to become part of our daily lives the way the internet and mobile are, we’ll need new spaces and interfaces. Read More
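The “wallets are changing how we log in” point above is already concrete: instead of a password, a site asks the wallet to sign a one-off message and verifies the signature on the server. A minimal sketch of that server-side check in Python, assuming the eth-account library; the message text, app name, and nonce handling are hypothetical, not any particular site’s scheme:

```python
# Minimal sketch of wallet-based login verification (assumes the eth-account
# library; the message wording and nonce flow here are hypothetical).
from eth_account import Account
from eth_account.messages import encode_defunct

def verify_wallet_login(claimed_address: str, nonce: str, signature: str) -> bool:
    """Return True if the signature over the login message came from the wallet
    that controls claimed_address."""
    message = encode_defunct(text=f"Sign in to example.app\nNonce: {nonce}")
    recovered = Account.recover_message(message, signature=signature)
    return recovered.lower() == claimed_address.lower()

# Usage: the browser wallet (e.g. MetaMask) signs the same message via
# personal_sign, and the front end posts address, nonce, and signature here.
```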

#metaverse

Microsoft’s AI Understands Humans…But It Had Never Seen One!

Read More
#fake, #image-recognition, #videos

AI program that started at Maricopa County colleges expanding nationwide

An artificial intelligence program that started at Maricopa Community Colleges in collaboration with Intel and Dell Technologies will expand nationwide by 2023.

The technology companies are partnering with the American Association of Community Colleges (AACC) to bring the program, which uses Intel’s AI-based curriculum to offer an associate degree and certificate of completion in the industry, to all 50 states after beginning in the Valley in fall 2020.

“This is an exciting partnership that will build a robust and diverse talent pipeline to help support future jobs in AI and technology,” Carlos Contreras, senior director of AI and Digital Readiness at Intel, said in a press release. Read More

#training

AI-generated characters for supporting personalized learning and well-being

Advancements in machine learning have recently enabled the hyper-realistic synthesis of prose, images, audio and video data, in what is referred to as artificial intelligence (AI)-generated media. These techniques offer novel opportunities for creating interactions with digital portrayals of individuals that can inspire and intrigue us. AI-generated portrayals of characters can feature synthesized faces, bodies and voices of anyone, from a fictional character to a historical figure, or even a deceased family member. Although negative use cases of this technology have dominated the conversation so far, in this Perspective we highlight emerging positive use cases of AI-generated characters, specifically in supporting learning and well-being. We demonstrate an easy-to-use AI character generation pipeline to enable such outcomes and discuss ethical implications as well as the need for including traceability to help maintain trust in the generated media. As we look towards the future, we foresee generative media as a crucial part of the ever growing landscape of human–AI interaction. Read More
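The authors’ character-generation pipeline isn’t reproduced here, but the traceability point is easy to illustrate: attach a small provenance record to each generated asset so its origin and disclosure status can be audited. A minimal sketch, with hypothetical field names rather than the paper’s format:

```python
# Minimal sketch of the traceability idea: a provenance record per generated
# asset. Field names are illustrative, not the authors' pipeline.
import hashlib
import json
import time

def provenance_record(asset_bytes: bytes, model_name: str, consent_ref: str) -> str:
    """Build a small metadata record tying a generated asset to its origin."""
    record = {
        "sha256": hashlib.sha256(asset_bytes).hexdigest(),  # fingerprint of the media
        "generator": model_name,                            # which model produced it
        "consent": consent_ref,                             # reference to usage consent
        "created_utc": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "ai_generated": True,                               # explicit disclosure flag
    }
    return json.dumps(record)
```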

#image-recognition

Neural networks can hide malware, and scientists are worried

With their millions and billions of numerical parameters, deep learning models can do many things: detect objects in photos, recognize speech, generate text—and hide malware. Neural networks can embed malicious payloads without triggering anti-malware software, researchers at the University of California, San Diego, and the University of Illinois have found.

Their malware-hiding technique, EvilModel, sheds light on the security concerns of deep learning, which has become a hot topic of discussion at machine learning and cybersecurity conferences. As deep learning becomes ingrained in the applications we use every day, the security community needs to think about new ways to protect users against the emerging threats it brings. Read More
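The article doesn’t reproduce EvilModel, but the underlying intuition is easy to demonstrate: the low-order bytes of 32-bit weights carry almost no numerical information, so arbitrary data written there barely perturbs the model. A minimal numpy sketch of the general idea (an illustration, not the researchers’ method):

```python
# Illustration only (not the EvilModel code): overwriting the low-order byte of
# float32 parameters changes their values so little that accuracy metrics and
# naive scans of "normal-looking" weights notice nothing.
import numpy as np

weights = np.random.randn(1_000).astype(np.float32)
payload = b"hello"                                   # stands in for arbitrary hidden bytes

stego = weights.copy()
raw = stego.view(np.uint8).reshape(-1, 4)            # 4 bytes per float32 (little-endian)
raw[: len(payload), 0] = np.frombuffer(payload, dtype=np.uint8)  # write into each LSB

recovered = raw[: len(payload), 0].tobytes()
print(recovered)                                     # b'hello'
print(np.max(np.abs(stego - weights)))               # perturbation well under 1e-3
```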

#cyber

Artificial Intelligence in the Metaverse: Bridging the Virtual and Real

Artificial intelligence (AI) applications are now much more common than you might think. In a recent McKinsey survey, 50% of respondents said that their companies use AI for at least one business function. A Deloitte report found that 40% of enterprises have an organisation-wide AI strategy in place.

In consumer-facing applications too, AI now plays a major role via facial recognition, natural language processing (NLP), faster computing, and all sorts of other under-the-hood processes.  

It was only a matter of time until AI was applied to augmented and virtual reality to build smarter immersive worlds.  

AI has the potential to parse huge volumes of data at lightning speed to generate insights and drive action. Users can either leverage AI for decision-making (as most enterprise applications do) or link AI with automation for low-touch processes.

The metaverse will use augmented and virtual reality (AR/VR) in combination with artificial intelligence and blockchain to create scalable and accurate virtual worlds.   Read More

#metaverse

This huge Chinese company is selling video surveillance systems to Iran

A new report sheds light on a shadowy industry where authoritarian states enthusiastically export surveillance technologies to repressive regimes around the world.

A Chinese company is selling its surveillance technology to Iran’s Revolutionary Guard, police, and military, according to a new report by IPVM, a surveillance research group. The firm, called Tiandy, is one of the world’s largest video surveillance companies, reporting almost $700 million in sales in 2020. The company sells cameras and accompanying AI-enabled software, including facial recognition technology, software that it claims can detect someone’s race, and “smart” interrogation tables for use alongside “tiger chairs,” which have been widely documented as a tool for torture.

The report is a rare look into some specifics of China’s strategic relationship with Iran and the ways in which the country disperses surveillance technology to other autocracies abroad.   Read More

#china, #surveillance

Intel thinks the metaverse will need a thousand-fold increase in computing capability

Intel made its first public statement on the metaverse on Tuesday, acknowledging that sometimes-nebulous future of computing which promises an always-connected virtual world existing in parallel with our physical one. But while the chip company is bullish on the metaverse’s possibilities in the abstract, it raises a key issue with realizing any metaverse ambitions: there’s not nearly enough processing power to go around.

“The metaverse may be the next major platform in computing after the world wide web and mobile,” begins an editorial from Raja Koduri, a senior vice president and head of Intel’s Accelerated Computing Systems and Graphics Group. But Koduri quickly pours cold water on the idea that the metaverse is right around the corner: “our computing, storage and networking infrastructure today is simply not enough to enable this vision,” he writes. Crucially, Koduri doesn’t even think we’re close: he says a 1,000x increase over our current collective computing capacity is needed. Read More

#metaverse

Language modelling at scale: Gopher, ethical considerations, and retrieval

Language, and its role in demonstrating and facilitating comprehension – or intelligence – is a fundamental part of being human. It gives people the ability to communicate thoughts and concepts, express ideas, create memories, and build mutual understanding. These are foundational parts of social intelligence. It’s why our teams at DeepMind study aspects of language processing and communication, both in artificial agents and in humans.

As part of a broader portfolio of AI research, we believe the development and study of more powerful language models – systems that predict and generate text –  have tremendous potential for building advanced AI systems that can be used safely and efficiently to summarise information, provide expert advice and follow instructions via natural language. Developing beneficial language models requires research into their potential impacts, including the risks they pose. This includes collaboration between experts from varied backgrounds to thoughtfully anticipate and address the challenges that training algorithms on existing datasets can create.

Today we are releasing three papers on language models that reflect this interdisciplinary approach. They include a detailed study of a 280 billion parameter transformer language model called Gopher, a study of ethical and social risks associated with large language models, and a paper investigating a new architecture with better training efficiency. Read More
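For readers new to the area, “systems that predict and generate text” boils down to an autoregressive loop: score the possible next tokens, sample one, append it to the context, and repeat. A minimal sketch with a toy stand-in model (Gopher itself is a 280-billion-parameter transformer and obviously not reproduced here):

```python
# Minimal sketch of autoregressive text generation. `next_token_logits` is a
# hypothetical stand-in for any trained language model.
import numpy as np

def sample_text(next_token_logits, prompt_tokens, steps=20, temperature=0.8, seed=0):
    """Softmax the model's logits over the context so far, sample a token, append, repeat."""
    rng = np.random.default_rng(seed)
    tokens = list(prompt_tokens)
    for _ in range(steps):
        logits = next_token_logits(tokens) / temperature   # one model call per step
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        tokens.append(int(rng.choice(len(probs), p=probs)))
    return tokens

# Toy stand-in model over a 5-token vocabulary, just to make the loop runnable.
toy = lambda toks: np.ones(5) + 0.1 * (np.arange(5) == (toks[-1] % 5))
print(sample_text(toy, [2]))
```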

#nlp

Do we need deep graph neural networks?

One of the hallmarks of deep learning was the use of neural networks with tens or even hundreds of layers. In stark contrast, most of the architectures used in graph deep learning are shallow with just a handful of layers. In this post, I raise a heretical question: does depth in graph neural network architectures bring any advantage?

This year, deep learning on graphs was crowned among the hottest topics in machine learning. Yet those who picture convolutional neural networks with tens or even hundreds of layers when they hear “deep” would be disappointed to see that the majority of works on graph “deep” learning use just a few layers at most. Are “deep graph neural networks” a misnomer, and should we, paraphrasing the classic, wonder whether depth should be considered harmful for learning on graphs? Read More
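For context on what “just a few layers” means in practice, here is a minimal two-layer graph convolutional network in numpy (a generic GCN sketch, not the post’s code): each layer mixes a node’s features with its neighbours’ via the normalised adjacency matrix, so depth k gives each node a k-hop receptive field.

```python
# Minimal two-layer GCN sketch: propagate over edges, project, apply ReLU.
import numpy as np

def normalised_adjacency(adj: np.ndarray) -> np.ndarray:
    """A_hat = D^{-1/2} (A + I) D^{-1/2}, the standard GCN propagation matrix."""
    a = adj + np.eye(adj.shape[0])
    d = np.diag(1.0 / np.sqrt(a.sum(axis=1)))
    return d @ a @ d

def two_layer_gcn(adj, features, w1, w2):
    """Depth 2 means each node aggregates information from its 2-hop neighbourhood."""
    a_hat = normalised_adjacency(adj)
    h = np.maximum(a_hat @ features @ w1, 0.0)   # layer 1: aggregate 1-hop neighbours
    return a_hat @ h @ w2                        # layer 2: receptive field grows to 2 hops

# Tiny example: a 4-node path graph, 3 input features, 2 output features.
adj = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], dtype=float)
x = np.random.randn(4, 3)
out = two_layer_gcn(adj, x, np.random.randn(3, 8), np.random.randn(8, 2))
print(out.shape)  # (4, 2)
```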

#neural-networks