ContinualAI

Humans have the extraordinary ability to learn continually from experience. Not only can we apply previously learned knowledge and skills to novel situations, but we can also use these as the foundation for later learning. One of the grand goals of AI is to build artificial “continual learning” agents that construct a sophisticated understanding of the world from their own experience through the incremental development of increasingly complex knowledge and skills.

ContinualAI is a registered non-profit research organization and the largest open community on Continual Learning for AI. Our core mission is to fuel continual learning research by connecting researchers in the field and offering a platform to share, discuss, and produce original research on a topic we consider fundamental for the future of AI. Read More

#human

AI Weekly: Continual learning offers a path toward more human-like AI

State-of-the-art AI systems are remarkably capable, but they suffer from a key limitation: they are static. Algorithms are trained once on a dataset and rarely again, making them incapable of learning new information without retraining. This stands in contrast to the human brain, which learns constantly, using knowledge gained over time and building on it as it encounters new information. While there’s been progress toward bridging the gap, solving the problem of “continual learning” remains a grand challenge in AI.

This challenge motivated a team of AI and neuroscience researchers to found ContinualAI, a nonprofit organization and open community of continual and lifelong learning enthusiasts. ContinualAI recently announced Avalanche, a library of tools compiled over the course of a year from over 40 contributors to make continual learning research easier and more reproducible. The group also hosts conference-style presentations, sponsors workshops and AI competitions, and maintains a repository of tutorials, code, and guides. Read More
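To make the "static model" problem concrete, here is a minimal PyTorch sketch (purely illustrative, with synthetic data; this is not ContinualAI or Avalanche code). A small classifier is trained on one task and then naively fine-tuned on a second; its accuracy on the first task typically collapses, which is the catastrophic forgetting that continual learning tools exist to mitigate.

```python
# Illustrative sketch of catastrophic forgetting (synthetic data; not Avalanche code).
# A small classifier is trained on task A, then naively fine-tuned on task B;
# accuracy on task A typically collapses because nothing preserves old knowledge.
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_task(shift):
    # Two Gaussian blobs; `shift` moves the task to a different input region.
    x0 = torch.randn(500, 2) + torch.tensor([shift, 0.0])
    x1 = torch.randn(500, 2) + torch.tensor([shift + 3.0, 0.0])
    y = torch.cat([torch.zeros(500, dtype=torch.long),
                   torch.ones(500, dtype=torch.long)])
    return torch.cat([x0, x1]), y

def train(model, x, y, epochs=200):
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()

def accuracy(model, x, y):
    return (model(x).argmax(dim=1) == y).float().mean().item()

model = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 2))
xa, ya = make_task(shift=0.0)   # task A
xb, yb = make_task(shift=-6.0)  # task B: same labels, different input region

train(model, xa, ya)
print(f"task A accuracy after training on A: {accuracy(model, xa, ya):.2f}")

train(model, xb, yb)  # naive sequential fine-tuning: no replay, no regularization
print(f"task A accuracy after training on B: {accuracy(model, xa, ya):.2f}")
# Accuracy on task A typically drops toward chance after training on task B.
```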

#human

The Autodidactic Universe

We present an approach to cosmology in which the Universe learns its own physical laws. It does so by exploring a landscape of possible laws, which we express as a certain class of matrix models. We discover maps that put each of these matrix models in correspondence with both a gauge/gravity theory and a mathematical model of a learning machine, such as a deep recurrent, cyclic neural network. This establishes a correspondence between each solution of the physical theory and a run of a neural network.

This correspondence is not an equivalence, partly because gauge theories emerge from N → ∞ limits of the matrix models, whereas the same limits of the neural networks used here are not well-defined.

We discuss in detail what it means to say that learning takes place in autodidactic systems, where there is no supervision. We propose that if the neural network model can be said to learn without supervision, the same can be said for the corresponding physical theory.

We consider other protocols for autodidactic physical systems, such as optimization of graph variety, subset-replication using self-attention and look-ahead, geometrogenesis guided by reinforcement learning, structural learning using renormalization group techniques, and extensions. These protocols together provide a number of directions in which to explore the origin of physical laws based on putting machine learning architectures in correspondence with physical theories. Read More

#artificial-intelligence, #human

AI 100: The Artificial Intelligence Startups Redefining Industries

Read More

#strategy

The future of AI is being shaped right now. How should policymakers respond?

The US government is contemplating how to shape AI policy. Competition with China looms large.

For a long time, artificial intelligence seemed like one of those inventions that would always be 50 years away. The scientists who developed the first computers in the 1950s speculated about the possibility of machines with greater-than-human capacities. But enthusiasm didn’t necessarily translate into a commercially viable product, let alone a superintelligent one.

And for a while — in the ’60s, ’70s, and ’80s — it seemed like such speculation would remain just that. The sluggishness of AI development actually gave rise to a term: “AI winters,” periods when investors and researchers got bored with the lack of progress in the field and devoted their attention elsewhere.

No one is bored now. Read More

#dod, #ic

The Perils of Overhyping Artificial Intelligence

In 1983, the U.S. military’s research and development arm began a ten-year, $1 billion machine intelligence program aimed at keeping the United States ahead of its technological rivals. From the start, computer scientists criticized the project as unrealistic. It promised big and ultimately failed hard in the eyes of the Pentagon, ushering in a long artificial intelligence (AI) “winter” during which potential funders, including the U.S. military, shied away from big investments in the field and abandoned promising areas of research.

Today, AI is once again the darling of the national security services. And once again, it risks sliding backward as a result of a destructive “hype cycle” in which overpromising conspires with inevitable setbacks to undermine the long-term success of a transformative new technology. Military powers around the world are investing heavily in AI, seeking battlefield and other security applications that might provide an advantage over potential adversaries. In the United States, there is a growing sense of urgency around AI, and rightly so. As former Secretary of Defense Mark Esper put it, “Those who are first to harness once-in-a-generation technologies often have a decisive advantage on the battlefield for years to come.” However, there is a very real risk that expectations are being set too high and that an unwillingness to tolerate failures will mean the United States squanders AI’s potential and falls behind its rivals. Read More

#dod, #ic

Preparing for AI-enabled cyberattacks

Artificial intelligence in the hands of cybercriminals poses an existential threat to organizations—IT security teams need “defensive AI” to fight back.

Cyberattacks continue to grow in prevalence and sophistication. With the ability to disrupt business operations, wipe out critical data, and cause reputational damage, they pose an existential threat to businesses, critical services, and infrastructure. Today’s new wave of attacks is outsmarting and outpacing humans, and even starting to incorporate artificial intelligence (AI). What’s known as “offensive AI” will enable cybercriminals to direct targeted attacks at unprecedented speed and scale while flying under the radar of traditional, rule-based detection tools.
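To illustrate the gap the article describes between rule-based tools and learned detection, here is a toy sketch (made-up traffic features and thresholds; not any vendor's product). A fixed threshold misses an attack that stays under its limit, while an anomaly detector fitted to normal behavior flags it as out of distribution.

```python
# Toy contrast between a static detection rule and a learned anomaly detector.
# Features and thresholds are invented for illustration only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Toy per-host features: [requests/minute, bytes out per request]
normal = rng.normal(loc=[60, 2_000], scale=[10, 300], size=(1000, 2))
# A "low and slow" exfiltration: the request rate stays under the rule's
# radar, but the combination of rate and payload size is highly unusual.
attack = np.array([[85.0, 9_000.0]])

# Rule-based tool: flags only when a single fixed threshold is crossed.
RATE_LIMIT = 100  # requests/minute
print("rule-based detector flags attack:", bool(attack[0, 0] > RATE_LIMIT))  # False

# Learned detector: models the joint distribution of normal behavior
# and flags points that do not fit it.
detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)
print("anomaly detector flags attack:", detector.predict(attack)[0] == -1)  # True
```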

Some of the world’s largest and most trusted organizations have already fallen victim to damaging cyberattacks, undermining their ability to safeguard critical data. With offensive AI on the horizon, organizations need to adopt new defenses to fight back: the battle of algorithms has begun. Read More

#cyber

Russia May Have Found a New Way to Censor the Internet

In an attempt to silence Twitter, the Kremlin appears to have developed novel techniques to restrict online content.

Russia has implemented a novel censorship method in an ongoing effort to silence Twitter. Instead of blocking the social media site outright, the country is using previously unseen techniques to slow traffic to a crawl and make the site all but unusable for people inside the country.

Research published Tuesday says that the throttling slows traffic traveling between Twitter and Russia-based end users to a paltry 128 kbps. Whereas past internet censorship techniques used by Russia and other nation-states have relied on simple blocking, slowing traffic passing to and from a widely used internet service is a relatively new technique that provides benefits for the censoring party. Read More

#russia, #surveillance

Are Feature Stores The Next Big Thing In Machine Learning?

According to a Gartner study, 85 percent of AI projects will flatline by 2022. Even the most diligently built machine learning models may not meet expectations when deployed in an enterprise setting, mainly for two reasons: inadequate data infrastructure and talent scarcity.

In the machine learning pipeline, searching for appropriate data and preparing datasets are among the most time-consuming steps. Data scientists spend around 80 percent of their time managing and preparing data for analysis. The demand-supply gap for qualified data scientists is another pressing challenge.

Enter the feature store. Read More
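For readers new to the term, here is a toy, in-memory sketch of the core idea (all names and the API are illustrative; real feature stores such as Feast or Tecton add durable storage, point-in-time joins, and monitoring). Feature transformations are registered once, and the same definitions produce both training data and low-latency online lookups, which is what keeps training and serving consistent.

```python
# A toy, in-memory feature store sketch. Names and API are illustrative;
# production systems add storage backends, point-in-time correctness,
# and monitoring. The core idea: feature definitions are registered once
# and the *same* transformation serves training and online lookups.
from typing import Callable, Dict

class FeatureStore:
    def __init__(self) -> None:
        self._definitions: Dict[str, Callable[[dict], float]] = {}
        self._online: Dict[str, Dict[str, float]] = {}  # entity_id -> features

    def register(self, name: str, fn: Callable[[dict], float]) -> None:
        """Register a named feature transformation."""
        self._definitions[name] = fn

    def materialize(self, entity_id: str, raw: dict) -> None:
        """Compute all registered features for one entity and cache them
        for low-latency online serving."""
        self._online[entity_id] = {
            name: fn(raw) for name, fn in self._definitions.items()
        }

    def get_online_features(self, entity_id: str) -> Dict[str, float]:
        return self._online[entity_id]

store = FeatureStore()
store.register("avg_order_value", lambda r: r["total_spend"] / max(r["orders"], 1))
store.register("orders_per_month", lambda r: r["orders"] / max(r["tenure_months"], 1))

# A batch pipeline materializes features; the serving path reads them back.
store.materialize("user_42", {"total_spend": 900.0, "orders": 12, "tenure_months": 6})
print(store.get_online_features("user_42"))
# {'avg_order_value': 75.0, 'orders_per_month': 2.0}
```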

#devops

Deep Learning May Not Be The Silver Bullet for All NLP Tasks Just Yet

Why You Should Still Learn Heuristics and Rule-Based Methods

Deep Learning: the solution to the problems of mankind. Over the past few years, Deep Learning has advanced humanity in novel ways. One of the beneficiaries is the entire field of Natural Language Processing (NLP). … Despite the monstrous success of Deep Learning, it’s still not yet the silver bullet for every NLP task. Therefore, practitioners shouldn’t rush to build the biggest RNN or transformer when faced with a problem in NLP. Read More
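To underline the point, here is a toy sketch of the heuristic approach (illustrative examples only, not production systems): a regular expression and a small keyword lexicon handle two narrow NLP tasks with no training data, no GPU, and fully inspectable behavior.

```python
# Toy heuristics for two narrow NLP tasks. For well-specified problems,
# rules like these can rival a heavy model at a fraction of the cost.
import re

DATE_PATTERN = re.compile(r"\b(\d{4})-(\d{2})-(\d{2})\b")

POSITIVE = {"great", "excellent", "love", "fast"}
NEGATIVE = {"terrible", "slow", "broken", "hate"}

def extract_dates(text: str) -> list[str]:
    """Rule-based date extraction: no model, no training data."""
    return ["-".join(m) for m in DATE_PATTERN.findall(text)]

def keyword_sentiment(text: str) -> str:
    """Lexicon heuristic: count positive vs. negative words."""
    words = set(re.findall(r"[a-z']+", text.lower()))
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(extract_dates("Shipped 2021-04-15, returned 2021-05-02."))
# ['2021-04-15', '2021-05-02']
print(keyword_sentiment("Delivery was fast and the quality is excellent."))
# positive
```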

#nlp