Google’s New AI: Flying Through Virtual Worlds! 

Read More
#big7, #image-recognition, #videos, #vfx

AI Winter is coming

It has happened before, and it will happen again, perhaps very soon. It is not a matter of if, but a matter of when.

The term AI winter is not a fancy phrase I made up to clickbait you into reading this article; it is a well-known term in the AI industry, owing to the two AI winters the field already experienced in the 20th century. Read More

#strategy

But is it art, Ma’am? Robot’s platinum jubilee Queen portrait unveiled

Humanoid artist Ai-Da pays tribute to monarch with painting but critic calls it ‘a cynical, transparent con’

At first glance, the Queen could be wearing a tin hat with camouflage netting set against a thunderous sky. A commentary on the inevitable conflicts and turbulence that took place during her 70-year reign, perhaps. Or a thoughtful juxtaposition of stability and instability.

But no, it seems that Ai-Da, the robot artist who painted the Queen’s portrait to mark her platinum jubilee, was simply paying tribute to “an amazing human being”. The monarch’s trademark pearls and bold colours, along with a stoic facial expression, are the standout features of Algorithm Queen, which was unveiled on Friday. Read More

#image-recognition

Causal AI — Enabling Data-Driven Decisions

Understand how Causal AI frameworks and algorithms support decision-making tasks such as estimating the impact of interventions, counterfactual reasoning, and transferring previously gained knowledge to other domains.

AI and Machine Learning solutions have made rapid strides in the last decade, and they are increasingly relied upon to generate predictions from historical data. However, they fall short of expectations when it comes to augmenting human decisions on tasks that require understanding the actual causes behind an outcome, quantifying the impact of different interventions on final outcomes, making policy decisions, or performing what-if reasoning about scenarios that have never occurred.

…While generating model predictions and explaining the key features that influence an outcome is helpful, it is not by itself enough to support decision making.

To facilitate decisions about the right interventions to reduce attrition, we need answers to the following questions:

  • What is the impact on final outcomes if the firm decides to make an intervention and organize regular quarterly training for its staff?
  • How can we compare the impact of different competing interventions, say organizing quarterly trainings versus arranging regular senior leadership connects?
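To make the first intervention question concrete, here is a minimal sketch of why naive comparisons fail and what a causal adjustment buys you. The scenario, numbers, and variable names are all invented for illustration (not from the article): seniority confounds both who receives quarterly training and who leaves, so a naive trained-vs-untrained comparison misestimates the training effect, while a simple backdoor adjustment over seniority strata recovers the true −10-point effect.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Invented attrition scenario: senior staff are both less likely to be
# sent to quarterly training AND less likely to leave, so seniority
# confounds the naive trained-vs-untrained comparison.
senior = rng.random(n) < 0.4
trained = rng.random(n) < np.where(senior, 0.2, 0.7)
p_leave = 0.30 - 0.10 * trained - 0.15 * senior   # true training effect: -10 pts
left = rng.random(n) < p_leave

# Naive comparison (biased by the confounder).
naive = left[trained].mean() - left[~trained].mean()

# Backdoor adjustment: effect within each seniority stratum,
# weighted by how common the stratum is.
adjusted = sum(
    (left[(senior == s) & trained].mean() - left[(senior == s) & ~trained].mean())
    * (senior == s).mean()
    for s in (False, True)
)

print(f"naive: {naive:+.3f}, adjusted: {adjusted:+.3f}")   # adjusted lands near -0.10
```

In practice, Causal AI frameworks such as DoWhy or EconML perform this kind of adjustment from an explicit causal graph rather than hand-rolled stratification, but the underlying logic is the same.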

Read More

#frameworks

Train 18-billion-parameter GPT models with a single GPU on your personal computer! Open source project Colossal-AI has added new features!

When it comes to training large AI models, people tend to think of thousands of GPUs and training costs so high that only a few tech giants can afford them, while AI users such as researchers at startups or universities can do little but watch the news about large models go by.

Now a PC with a single GPU can train a GPT model with up to 18 billion parameters, and even a laptop can train a model with more than a billion parameters. Compared with the existing mainstream solutions, that is a more than tenfold increase in parameter capacity!

Such a significant improvement comes from Colossal-AI, which is an efficient training system for general large AI models. Best of all, it’s completely open-sourced and requires only minimal modifications to allow existing deep learning projects to be trained with much larger models on a single consumer-grade graphics card, allowing everyone to train large AI models at home! In particular, it makes downstream tasks and application deployments such as large AI model fine-tuning and inference much easier! Read More

#performance

AI Inventing Its Own Culture, Passing It On to Humans, Sociologists Find

Algorithms could increasingly influence human culture, even though we don’t have a good understanding of how they interact with us or each other.

A new study shows that humans can learn new things from artificial intelligence systems and pass them on to other humans, in ways that could influence wider human culture.

The study, published on Monday by a group of researchers at the Center for Humans and Machines at the Max Planck Institute for Human Development, suggests that while humans can learn from algorithms how to solve certain problems better, human biases prevented the performance improvements from lasting as long as expected. Humans tended to prefer solutions from other humans over those proposed by algorithms because they were more intuitive or less costly upfront, even if they paid off more later. Read More

#human

Manipulating SGD with data ordering attacks

Machine learning is vulnerable to a wide variety of attacks. It is now well understood that by changing the underlying data distribution, an adversary can poison the model trained with it or introduce backdoors. In this paper we present a novel class of training-time attacks that require no changes to the underlying dataset or model architecture, but instead only change the order in which data are supplied to the model. In particular, we find that the attacker can either prevent the model from learning, or poison it to learn behaviours specified by the attacker. Furthermore, we find that even a single adversarially-ordered epoch can be enough to slow down model learning, or even to reset all of the learning progress. Indeed, the attacks presented here are not specific to the model or dataset, but rather target the stochastic nature of modern learning procedures. We extensively evaluate our attacks on computer vision and natural language benchmarks to find that the adversary can disrupt model training and even introduce backdoors. Read More
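The core observation, that ordering alone with an unchanged dataset can steer training, can be illustrated in a toy setting far simpler than the paper's vision and language experiments. This sketch is mine, not the authors': one epoch of SGD estimating a mean sees exactly the same data points in both runs, yet an adversarially sorted presentation drags the final parameter far from the truth, because SGD implicitly weights recently seen samples more.

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(loc=0.0, scale=1.0, size=2000)  # true mean = 0

def sgd_mean(stream, lr=0.05):
    """One epoch of SGD minimising (w - x)^2 / 2 over the stream."""
    w = 0.0
    for x in stream:
        w -= lr * (w - x)   # gradient step; w becomes a recency-weighted average
    return w

honest = sgd_mean(rng.permutation(data))   # shuffled, i.i.d.-looking order
attacked = sgd_mean(np.sort(data))         # identical points, adversarial order

print(f"honest: {honest:+.3f}, attacked: {attacked:+.3f}")
```

The shuffled run lands near the true mean of 0, while the sorted run ends up dominated by the largest samples it saw last. Real models and the paper's actual attacks are far more involved, but this is the stochastic-ordering weakness the abstract describes.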

#adversarial

The hype around DeepMind’s new AI model misses what’s actually cool about it

Earlier this month, DeepMind presented a new “generalist” AI model called Gato. The model can play Atari video games, caption images, chat, and stack blocks with a real robot arm, the Alphabet-owned AI lab announced. All in all, Gato can do 604 different tasks. 

But while Gato is undeniably fascinating, in the week since its release some researchers have gotten a bit carried away.

One of DeepMind’s top researchers and a coauthor of the Gato paper, Nando de Freitas, couldn’t contain his excitement. “The game is over!” he tweeted, suggesting that there is now a clear path from Gato to artificial general intelligence, or AGI, a vague concept of human- or superhuman-level AI. …Unsurprisingly, de Freitas’s announcement triggered breathless press coverage that DeepMind is “on the verge” of human-level artificial intelligence. This is not the first time hype has outstripped reality.

…That’s a shame, because Gato is an interesting step. Some models have started to mix different skills, …DeepMind’s AlphaZero learned to play Go, chess, and shogi, …but here’s the crucial difference: AlphaZero could only learn one task at a time. After learning to play Go, it had to forget everything before learning to play chess, and so on. It could not learn to play both games at once. This is what Gato does: it learns multiple different tasks at the same time, which means it can switch between them without having to forget one skill before learning another. It’s a small advance but a significant one. Read More
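The difference between sequential and simultaneous training can be sketched in a deliberately tiny example of my own, nothing like DeepMind's actual setup: two linear "tasks" share a parameter. Training them one after the other drags the shared weight away from the first task (catastrophic forgetting), while interleaving batches from both tasks, as multitask training does, preserves both skills.

```python
import numpy as np

rng = np.random.default_rng(2)

def batch(task, n=32):
    """Toy tasks that share a parameter: task A: y = 2x, task B: y = -3x.
    Features are [x, x * is_task_B], so w[0] is used by BOTH tasks."""
    x = rng.normal(size=n)
    if task == "A":
        return np.stack([x, np.zeros(n)], axis=1), 2.0 * x
    return np.stack([x, x], axis=1), -3.0 * x

def step(w, X, y, lr=0.1):
    return w - lr * X.T @ (X @ w - y) / len(y)   # one SGD step on squared error

# Sequential: learn task A, then train only on task B.
w = np.zeros(2)
for _ in range(500): w = step(w, *batch("A"))
for _ in range(500): w = step(w, *batch("B"))
err_A_seq = abs(w[0] - 2.0)     # shared weight dragged away: task A is forgotten

# Interleaved: batches from both tasks in every round (multitask training).
w = np.zeros(2)
for _ in range(500):
    w = step(w, *batch("A"))
    w = step(w, *batch("B"))
err_A_multi = abs(w[0] - 2.0)   # task A survives

print(f"task-A error, sequential: {err_A_seq:.2f}, interleaved: {err_A_multi:.2f}")
```

The point of the sketch is only the mechanism: when tasks share parameters, training them strictly one at a time overwrites earlier skills, while mixing them lets a single parameter set satisfy both.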

#singularity

The dark secret behind those cute AI-generated animal images

Another month, another flood of weird and wonderful images generated by an artificial intelligence. In April, OpenAI showed off its new picture-making neural network, DALL-E 2, which could produce remarkable high-res images of almost anything it was asked to. It outstripped the original DALL-E in almost every way.

Now, just a few weeks later, Google Brain has revealed its own image-making AI, called Imagen. And it performs even better than DALL-E 2: it scores higher on a standard measure for rating the quality of computer-generated images, and the pictures it produced were preferred by a group of human judges.

“We’re living through the AI space race!” one Twitter user commented. “The stock image industry is officially toast,” tweeted another. Read More

#image-recognition

Copilot, GitHub’s AI-powered coding tool, will be free for students

Last June, Microsoft-owned GitHub and OpenAI launched Copilot, a service that provides suggestions for whole lines of code inside development environments like Microsoft Visual Studio. Available as a downloadable extension, Copilot is powered by an AI model called Codex that’s trained on billions of lines of public code to suggest additional lines of code and functions given the context of existing code. Copilot can also surface an approach or solution in response to a description of what a developer wants to accomplish (e.g. “Say hello world”), drawing on its knowledge base and current context.
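For the "Say hello world" prompt mentioned above, the interaction pattern looks roughly like this: the developer writes a comment or function name, and the tool proposes the body from that context. The completion shown here is illustrative only, written by me, not Copilot's actual output.

```python
# Prompt written by the developer; Copilot-style tools read this context:
# Say hello world

def say_hello_world() -> str:
    """Return the classic greeting."""
    return "Hello, world!"

print(say_hello_world())
```

Longer natural-language descriptions work the same way, with the model drawing on surrounding code in the file to shape its suggestion.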

While Copilot was previously available in technical preview, it’ll become generally available starting sometime this summer, Microsoft announced at Build 2022. Copilot will also be available free for students as well as “verified” open source contributors. On the latter point, GitHub said it’ll share more at a later date. Read More

#devops