Report: 70% of orgs are spending $1M or more on AI

According to a new report by LXT, artificial intelligence (AI) spending is strong at mid-to-large U.S. organizations, and 40% rate themselves at the three highest levels of AI maturity, having already achieved operational to transformative implementations. A key component of success across all organizations is AI training data, in terms of both quality and investment.

The survey found that over a third of high-revenue companies are spending between $51 million and $100 million on AI, and seven in ten organizations are spending $1 million or more of their budget on AI. Enterprises are using AI to innovate, scale up and drive competitive advantage, as well as gain internal efficiencies. Read More

#strategy

Six Unaddressed Legal Concerns For The Metaverse

The metaverse is the next generation of the internet built on the core principles of immersion, augmentation, automation, decentralization, mobilization, autonomization and real-time activity. Various companies and technologies will converge, combining open-source and proprietary systems to weave standalone VR experiences together using visual, audio and haptic technology. This combination will create digital worlds to drive new types of interaction as well as content creation, socializing and monetization.

The metaverse will also bring multiple new legal implications, especially in the absence of existing standards and precedent. Read More

#metaverse

How technology systems are slowing innovation

In 2005, years before Apple’s Siri and Amazon’s Alexa came on the scene, two startups—ScanSoft and Nuance Communications—merged to pursue a burgeoning opportunity in speech recognition. The new company developed powerful speech-processing software and grew rapidly for almost a decade—an average of 27% per year in sales. Then suddenly, around 2014, it stopped growing. Revenues in 2019 were roughly the same as revenues in 2013. Nuance had run into strong headwinds, as large computer firms that were once its partners became its competitors. 

Nuance’s story is far from unique. In all major industries and technology domains, startups are facing unprecedented obstacles. New companies are still springing up to exploit innovative opportunities. And these companies can now tap into an extraordinary flood of venture capital. Yet all is not healthy in the startup economy. Innovative startups are growing much more slowly than comparable companies did in the past.  Read More

#strategy

China’s Race Towards AI Research Dominance

Since taking its first steps teaching computers board game strategies in the 1950s, research on artificial intelligence has come a long way. In the 21st century in particular, machine learning, with its promise of algorithms that improve in real time through experience and access to more data, has become the single biggest research focus in the field. As our chart based on data provided by the OECD.AI project shows, China is well on its way to surpassing the traditional artificial intelligence research powerhouses in the coming years.

While the U.S. still leads the world with about 150,000 AI research papers published in 2021, the People’s Republic’s output isn’t far off, thanks to an astronomical increase over the last two decades. In 2008 the East Asian country surpassed the combined AI research output of the 27 EU countries, and it now sits in second place with roughly 138,000 papers published in 2021. Overall, it increased its research output by 3,350 percent over the last two decades. Read More

#china-ai

Why did Luke Skywalker Sound… Weird? – The Book of Boba Fett Controversy Explained

Read More

#vfx

How to transition into a career in ML/AI

Read More

#data-science, #videos

Towards better data discovery and collection with flow-based programming

Despite huge successes reported by the field of machine learning, such as voice assistants or self-driving cars, businesses still observe a very high failure rate when it comes to deploying ML in production. We argue that part of the reason is infrastructure that was not designed for data-oriented activities. This paper explores the potential of flow-based programming (FBP) for simplifying data discovery and collection in software systems. We compare FBP with the currently prevalent service-oriented paradigm to assess the characteristics of each paradigm in the context of ML deployment. We develop a data processing application, formulate a subsequent ML deployment task, and measure the impact of the task implementation within both programming paradigms. Our main conclusion is that FBP shows great potential for providing data-centric infrastructural benefits for the deployment of ML. Additionally, we provide insight into the current trend that prioritizes model development over data quality management. Read More
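To make the flow-based idea concrete, here is a minimal Python sketch (not the paper’s code; the reader/cleaner/sink components and the tap port are hypothetical) showing how components connected only by explicit data queues make the records flowing between steps easy to observe and collect for ML:

```python
# Minimal flow-based-programming sketch (illustrative only, not the paper's implementation).
# Components are black boxes connected by explicit queues, so every record that flows
# between steps can be observed, logged, or collected as ML training data.
import queue
import threading


def reader(out_q):
    """Source component: emits raw records onto its output port."""
    for record in [{"id": 1, "value": " 42 "}, {"id": 2, "value": "17"}]:
        out_q.put(record)
    out_q.put(None)  # end-of-stream marker


def cleaner(in_q, out_q, tap):
    """Transform component: cleans records; the 'tap' port exposes
    intermediate data for discovery and collection."""
    while (record := in_q.get()) is not None:
        cleaned = {**record, "value": int(record["value"].strip())}
        tap.append(cleaned)  # data collected as a side effect of the flow
        out_q.put(cleaned)
    out_q.put(None)


def sink(in_q):
    """Sink component: consumes the final records."""
    while (record := in_q.get()) is not None:
        print("processed:", record)


# Wire the graph: reader -> cleaner -> sink, plus a tap for dataset building.
q1, q2, collected = queue.Queue(), queue.Queue(), []
threads = [
    threading.Thread(target=reader, args=(q1,)),
    threading.Thread(target=cleaner, args=(q1, q2, collected)),
    threading.Thread(target=sink, args=(q2,)),
]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("records available for ML:", collected)
```

In a service-oriented design the same intermediate data would typically be hidden inside request/response calls; in the flow-based version it travels over named connections that can be tapped without changing the components themselves.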

#devops

DeepMind’s AI can control superheated plasma inside a fusion reactor

DeepMind’s streak of applying its world-class AI to hard science problems continues. In collaboration with the Swiss Plasma Center at EPFL—a university in Lausanne, Switzerland—the UK-based AI firm has now trained a deep reinforcement learning algorithm to control the superheated soup of matter inside a nuclear fusion reactor. The breakthrough, published in the journal Nature, could help physicists better understand how fusion works, and potentially speed up the arrival of an unlimited source of clean energy. Read More

#big7

AI-synthesized faces are indistinguishable from real faces and more trustworthy

Artificial intelligence (AI)–synthesized text, audio, images, and video are being weaponized for the purposes of nonconsensual intimate imagery, financial fraud, and disinformation campaigns. Our evaluation of the photorealism of AI-synthesized faces indicates that synthesis engines have passed through the uncanny valley and are capable of creating faces that are indistinguishable from, and more trustworthy than, real faces. Read More

#fake

FILM: Frame Interpolation for Large Scene Motion

TensorFlow 2 implementation of our high-quality frame interpolation neural network. We present a unified single-network approach that doesn’t use additional pre-trained networks, like optical flow or depth, and yet achieves state-of-the-art results. We use a multi-scale feature extractor that shares the same convolution weights across the scales. Our model is trainable from frame triplets alone. Read More
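As a rough illustration of the shared-weight, multi-scale idea described above (not the official FILM code; the layer sizes and function names are made up), a TensorFlow 2 sketch might apply one small convolutional extractor to every level of an image pyramid:

```python
# Illustrative sketch of a multi-scale feature extractor that shares convolution
# weights across scales, in the spirit of the description above (not FILM itself).
import tensorflow as tf


def build_shared_extractor(channels=32):
    """One set of conv layers, reused at every pyramid level (weight sharing)."""
    return tf.keras.Sequential([
        tf.keras.layers.Conv2D(channels, 3, padding="same", activation="relu"),
        tf.keras.layers.Conv2D(channels, 3, padding="same", activation="relu"),
    ])


def multi_scale_features(image, extractor, num_scales=3):
    """Build an image pyramid and run the *same* extractor at each scale."""
    features = []
    current = image
    for _ in range(num_scales):
        features.append(extractor(current))
        # Downsample by 2 for the next pyramid level.
        current = tf.keras.layers.AveragePooling2D(pool_size=2)(current)
    return features


# Example: two frames of a (hypothetical) triplet run through the shared extractor.
extractor = build_shared_extractor()
frame0 = tf.random.uniform((1, 256, 256, 3))
frame1 = tf.random.uniform((1, 256, 256, 3))
pyramid0 = multi_scale_features(frame0, extractor)
pyramid1 = multi_scale_features(frame1, extractor)
print([f.shape for f in pyramid0])  # feature maps at 256, 128 and 64 resolution
```

Because the same layer instances are called at every scale, coarse and fine levels are embedded with identical weights, which is the weight-sharing property the repository highlights.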

#big7, #devops, #image-recognition