Neural networks can hide malware, and scientists are worried

With their millions and billions of numerical parameters, deep learning models can do many things: detect objects in photos, recognize speech, generate text—and hide malware. Neural networks can embed malicious payloads without triggering anti-malware software, researchers at the University of California, San Diego, and the University of Illinois have found.

Their malware-hiding technique, EvilModel, sheds light on the security concerns of deep learning, which has become a hot topic of discussion at machine learning and cybersecurity conferences. As deep learning becomes ingrained in applications we use every day, the security community needs to think about new ways to protect users against the emerging threats it introduces. Read More

#cyber

Artificial Intelligence in the Metaverse: Bridging the Virtual and Real

Artificial intelligence (AI) applications are now much more common than you might think. In a recent McKinsey survey, 50% of respondents said that their companies use AI for at least one business function. A Deloitte report found that 40% of enterprises have an organization-wide AI strategy in place.

In consumer-facing applications, too, AI now plays a major role via facial recognition, natural language processing (NLP), faster computing, and all sorts of other under-the-hood processes.

It was only a matter of time until AI was applied to augmented and virtual reality to build smarter immersive worlds.  

AI has the potential to parse huge volumes of data at lightning speed to generate insights and drive action. Users can either leverage AI for decision-making (which is the case for most enterprise applications), or link AI with automation for low-touch processes.

The metaverse will use augmented and virtual reality (AR/VR) in combination with artificial intelligence and blockchain to create scalable and accurate virtual worlds. Read More

#metaverse