Beyond the AI Arms Race

The idea of an artificial intelligence (AI) arms race between China and the United States is ubiquitous. Before 2016, there were fewer than 300 Google results for “AI arms race” and only a handful of articles that mentioned the phrase. Today, an article on the subject gets added to LexisNexis virtually every week, and Googling the term yields more than 50,000 hits. Some even warn of an AI Cold War.

One question that looms large in these discussions is whether China has, or will soon have, an edge over the United States in AI technology. Dean Garfield, the president of a U.S. trade group called the Information Technology Industry Council, recently told Politico that such fears are “grounded in hysteria.” But many prominent figures disagree. Former Alphabet executive chairman Eric Schmidt, for instance, warned in 2017 that “By 2020, [the Chinese] will have caught up [to the United States]. By 2025, they will be better than us. And by 2030, they will dominate the industries of AI.” And former Deputy Defense Secretary Bob Work, among others, has argued that China’s advances in AI should spark a “Sputnik moment” for the United States, inspiring a national effort comparable to the one that followed the Soviet Union’s early victories in the space race. Read More

#china-vs-us

Welcome to the ChinAI Newsletter!

Here’s an archive of all past issues. Read More

#china, #china-ai

What is Quantum Computing and How is it Useful for Artificial Intelligence?

After decades of a heavy slog with no promise of success, quantum computing is suddenly buzzing! Nearly two years ago, IBM made a quantum computer available to the world: a 5-quantum-bit (qubit) resource it now calls the IBM Q Experience. It was more like a toy for researchers than a way of getting any serious number crunching done, but 70,000 users worldwide have registered for it, and the qubit count in this resource has since quadrupled. With quantum computing showing so much promise and data science currently at the helm, what does quantum computing have to offer AI? Let us explore that in this blog! Read More
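
For readers curious what programming the 5-qubit machine mentioned above looks like, here is a minimal sketch using Qiskit, IBM's open-source SDK for the IBM Q Experience. The circuit itself (a two-qubit Bell state) is just an illustrative choice, and running it on real hardware would additionally require an IBM Q account and backend setup, which is omitted here.

```python
# Minimal Qiskit sketch: build and print a 2-qubit Bell-state circuit.
# Illustrative only; hardware execution (account, backend selection) is omitted.
from qiskit import QuantumCircuit

circuit = QuantumCircuit(2, 2)    # 2 qubits, 2 classical bits for readout
circuit.h(0)                      # put qubit 0 into an equal superposition
circuit.cx(0, 1)                  # entangle qubit 1 with qubit 0
circuit.measure([0, 1], [0, 1])   # measure both qubits into the classical bits

print(circuit.draw())             # ASCII diagram of the circuit
```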

#quantum

Analyst 2.0 – a “How To” Guide to Embracing AI in the Intel Community

In my previous post on Government Technology Insider, I shared the findings of a recently released Thought Piece by the global government consultancy Booz Allen Hamilton, which delved into the role that Artificial Intelligence (AI) and Machine Learning (ML) can play in the analysis of intelligence data.

Ultimately, the Thought Piece concluded that AI and ML could help to alleviate some of the more redundant and tedious tasks that normally fall on the plate of the analyst community – tasks such as reviewing countless hours of intelligence, surveillance, and reconnaissance (ISR) video, watching for the slightest changes and discrepancies that could be of interest to the mission and national security. This could shift the role of the analyst from searching for red flags to analyzing those red flags for pertinence to the mission.

However, the Thought Piece also laid out a problem with the adoption of AI and ML in the intelligence community and military. That problem was effectively fear – fear that the machines couldn’t do an extremely important job as effectively as humans, and fear that jobs could be eliminated if the machines did the task too well. Read More

#strategy

Top GAN Research Papers Every Machine Learning Enthusiast Must Peruse

In the early 1960s, AI pioneer Herbert Simon predicted that within two decades machines would match the cognitive abilities of humankind. Predictions like these motivated theorists, sceptics and thinkers from a cross-section of domains to find ways to use computers to perform routine tasks. From Heron’s Automatons in the first century to Google’s DeepMind in the 21st century, mankind has yearned to make machines more ‘human’.

The latest developments in AI, especially in the applications of Generative Adversarial Networks (GANs), can help researchers tackle the final frontier of replicating human intelligence. With a new paper being released every week, GANs are proving to be a front-runner in the pursuit of the ultimate goal: artificial general intelligence (AGI).
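
As a quick refresher on the adversarial setup these papers build on, here is a minimal sketch of a GAN training loop. The framework (PyTorch), the toy data distribution and the network sizes are illustrative choices, not taken from any of the papers below.

```python
# Minimal GAN sketch: a generator maps noise to fake samples, a discriminator
# tries to tell real from fake, and the two are trained against each other.
import torch
import torch.nn as nn

latent_dim, data_dim, batch = 16, 2, 64          # toy sizes, for illustration

generator = nn.Sequential(
    nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))
discriminator = nn.Sequential(
    nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(1000):
    real = torch.randn(batch, data_dim) * 0.5 + 2.0   # stand-in "real" data
    fake = generator(torch.randn(batch, latent_dim))

    # Discriminator step: push real samples toward label 1, fakes toward 0.
    d_loss = bce(discriminator(real), torch.ones(batch, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(batch, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator step: try to fool the discriminator into labeling fakes as real.
    g_loss = bce(discriminator(fake), torch.ones(batch, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```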

Here are a few papers that verify the growing popularity of GANs: Read More

#gans

Google’s AI Processor’s (TPU) Heart Throbbing Inspiration

Google has finally released the technical details of its Tensor Processing Unit (TPU) ASIC. Surprisingly, at its core you find something that sounds as if it were inspired by the heart rather than the brain. It’s called a “Systolic Array,” and this computational device contains 256 x 256 8-bit multiply-add units. That’s a grand total of 65,536 processors capable of cranking out 92 trillion operations per second! A systolic array is not a new idea; it was described way back in 1982 by H. T. Kung of CMU in “Why Systolic Architectures?” Just to date myself, I still recall a time when systolic machines were all the rage. Read More
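
To make the heartbeat intuition concrete, here is a toy sketch of how partial sums pulse through a grid of stationary-weight multiply-add cells, together with a back-of-the-envelope check of the quoted 92-trillion-ops figure. The tiny array size is purely illustrative, and the 700 MHz clock is the figure from Google's published TPU paper.

```python
import numpy as np

# Toy "systolic-style" multiply-add: each row of weights multiplies the
# activation passing through it, one clock cycle at a time, and the partial
# sums pulse downward -- the rhythmic data movement behind the name.
N = 4                                                  # toy array size (TPU: 256)
weights = np.random.randint(-8, 8, size=(N, N))        # stationary weights
activations = np.random.randint(-8, 8, size=N)         # one input vector

partial_sums = np.zeros(N, dtype=int)
for cycle in range(N):                                 # one row per cycle
    partial_sums += activations[cycle] * weights[cycle, :]

assert np.array_equal(partial_sums, activations @ weights)

# Back-of-the-envelope check of the quoted peak throughput:
# 256 x 256 = 65,536 multiply-add units, 2 ops each per cycle, ~700 MHz clock.
print(f"{256 * 256 * 2 * 700e6 / 1e12:.1f} trillion ops/sec")   # ~91.8
```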

#nvidia