13 Common Mistakes That Can Derail Your AI Initiatives

13 experts from Forbes Technology Council share common mistakes to watch out for when implementing AI.

  • Adopting Too Many Tools At Once
  • Not Having A Clear Objective
  • Not Having A Single Source Of Truth
  • Not Analyzing Enough Data
  • Incorrectly Structuring Datasets
  • Implementing Siloed Solutions
  • Not Having The Right Size Team
  • Not Doing The Necessary Groundwork
  • Assuming AI Is A Catch-All Solution
  • Misidentifying Both The Problem And The Best Solution
  • Implementing AI For Its Own Sake
  • Implementing Solutions Without Sufficient Data
  • Thinking AI Is ‘One-Size-Fits-All’

Read More

#strategy

U.S. Holds Slim Edge over China in Artificial Intelligence, Former Google Chairman Says

The chairman of a special commission on artificial intelligence warned Congress the United States is only one to two years ahead of China in developing artificial intelligence, as Beijing remains “relentlessly focused” on achieving dominance across the broad spectrum of high technologies.

Testifying Tuesday before the Senate Armed Services Committee, Eric Schmidt, former chairman of Google, said the United States needs to maintain a five- to 10-year advantage over its “pacing competitor” in AI and other high-technology fields like quantum computing. Read More

#china-vs-us

Band of AI startups launch ‘rebel alliance’ for interoperability

More than 20 AI startups have banded together to create the AI Infrastructure Alliance in order to build a software and hardware stack for machine learning and adopt common standards. The alliance brings together companies like Algorithmia; Determined AI, which works with deep learning; data monitoring startup WhyLabs; and Pachyderm, a data science company that raised $16 million last year in a round led by M12, formerly Microsoft Ventures. A spokesperson for the alliance said partner organizations have raised about $200 million in funding from investors.

Dan Jeffries, chief tech evangelist at Pachyderm, will serve as director of the alliance. He said the group began to form from conversations that started over a year ago. Participants include a number of companies whose founders have experience running systems at scale within Big Tech companies. For example, WhyLabs CEO and cofounder Alessya Visnjic worked on fixing machine learning issues at Amazon, and Jeffries previously worked with machine learning at Red Hat. Read More

#standards

China Develops Monkey Facial Recognition Using AI Technology

A research team from China’s Northwest University is using artificial intelligence (AI) and other new technologies to develop a facial recognition technology for monkeys, aiming to identify thousands of Sichuan golden snub-nosed monkeys in the Qinling Mountains in Shaanxi Province.

Like current facial recognition technology for humans, the monkey version can extract facial feature information from a monkey to establish an identity database of individual monkeys in the Qinling Mountains, the Xinhua News Agency reported.

“When monkey facial recognition technology is fully developed, we can integrate the technology into infrared camera sets in the mountains. The system will automatically recognize the monkeys, name them and analyze their behavior,” said Zhang He, a member of the Northwest University research team. Read More

#china-ai, #image-recognition, #surveillance

Class Imbalance: Random Sampling and Data Augmentation with Imbalanced-Learn

An exploration of the class imbalance problem, the accuracy paradox and some techniques to solve this problem by using the Imbalanced-Learn library.

One of the challenges that arises when developing machine learning models for classification is class imbalance. Most machine learning algorithms for classification were developed assuming balanced classes; however, in real life it is not common to have properly balanced data. Because of this, various techniques have been proposed to address the problem, along with tools to apply them. Such is the case with imbalanced-learn [1], a Python library that implements the most relevant algorithms for tackling class imbalance.

In this blog post we look at what class imbalance is, the problem with using accuracy as a metric on imbalanced classes, what random under-sampling and random over-sampling are, and imbalanced-learn as a tool for addressing class imbalance appropriately. Read More
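To make the idea concrete, here is a minimal sketch of what random over-sampling does: duplicate randomly chosen minority-class rows until every class matches the majority count. This is a hand-rolled illustration, not imbalanced-learn's actual implementation (the library's `RandomOverSampler.fit_resample` provides the production version); the function and data below are purely illustrative.

```python
import random
from collections import Counter

def random_over_sample(X, y, seed=0):
    """Naive random over-sampling: duplicate randomly chosen
    minority-class rows until every class reaches the majority count."""
    rng = random.Random(seed)
    counts = Counter(y)
    target = max(counts.values())
    X_res, y_res = list(X), list(y)
    for label, n in counts.items():
        idx = [i for i, lab in enumerate(y) if lab == label]
        for _ in range(target - n):          # add (target - n) duplicates
            i = rng.choice(idx)
            X_res.append(X[i])
            y_res.append(label)
    return X_res, y_res

# Toy 8:2 imbalanced dataset
X = [[0], [1], [2], [3], [4], [5], [6], [7], [8], [9]]
y = [0] * 8 + [1] * 2
X_res, y_res = random_over_sample(X, y)
print(Counter(y_res))  # both classes now have 8 samples
```

Random under-sampling is the mirror image: instead of duplicating minority rows, it discards randomly chosen majority rows until the classes match, which loses data but keeps the dataset small.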

#machine-learning

China Censors the Internet. So Why Doesn’t Russia?

The Kremlin has constructed an entire infrastructure of repression but has not displaced Western apps. Instead, it is turning to outright intimidation.

Margarita Simonyan, the editor in chief of the Kremlin-controlled RT television network, recently called on the government to block access to Western social media.

She wrote: “Foreign platforms in Russia must be shut down.”

Her choice of social network for sending that message: Twitter.

While the Kremlin fears an open internet shaped by American companies, it just can’t quit it. Read More

#russia

Is Google’s AI research about to implode?

What do Timnit Gebru’s firing and the recent papers coming out of Google tell us about the state of research at the world’s biggest AI research department?

The high point for Google’s research into Artificial Intelligence may well turn out to be the 19th of October 2017. This was the date that David Silver and his co-workers at DeepMind published a report, in the journal Nature, showing how their deep-learning algorithm AlphaGo Zero was a better Go player than not only the best human in the world but also all other Go-playing computers.

What was most remarkable about AlphaGo Zero was that it worked without human assistance. … But there was a problem. Maybe it wasn’t Silver and his colleagues’ problem, but it was a problem all the same. The DeepMind research program had shown what deep neural networks could do, but it had also revealed what they couldn’t do. Read More

#big7

The Thousand Brains Theory of Intelligence

In our most recent peer-reviewed paper, A Framework for Intelligence and Cortical Function Based on Grid Cells in the Neocortex, we put forward a novel theory for how the neocortex works. The Thousand Brains Theory of Intelligence proposes that rather than learning one model of an object (or concept), the brain builds many models of each object. Each model is built using different inputs, whether from slightly different parts of the sensor (such as different fingers on your hand) or from different sensors altogether (eyes vs. skin). The models vote together to reach a consensus on what they are sensing, and the consensus vote is what we perceive. It’s as if your brain is actually thousands of brains working simultaneously.

A key insight of our theory is based on an understanding of grid cells, neurons which are found in an older part of the brain responsible for navigation and knowing where you are in the world. Scientists have made great progress over the past few decades in understanding that the function of grid cells is to represent the location of a body in an environment. Recent experimental evidence suggests that grid cells also are present in the neocortex. We propose that grid cells exist throughout the neocortex, in every region and in every cortical column, and that they define a location-based framework for how the neocortex works. The same grid cell-based mechanism used in the older part of the brain to learn the structure of environments is used by the neocortex to learn the structure of objects, not only what they are, but also how they behave. Read More
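The voting step described above can be loosely sketched as an ensemble of independent models whose guesses are pooled by majority vote. This is only an analogy for illustration, not the cortical mechanism from the paper; the column labels and objects below are made up.

```python
from collections import Counter

def consensus(votes):
    """Pool independent per-column guesses into one percept by
    majority vote, returning the winner and its agreement fraction."""
    tally = Counter(votes)
    winner, count = tally.most_common(1)[0]
    return winner, count / len(votes)

# Each "cortical column" models the object from its own sensory input
# (a different finger, a different patch of retina) and casts a vote.
column_votes = ["mug", "mug", "bowl", "mug", "mug", "can"]
percept, agreement = consensus(column_votes)
print(percept)  # mug
```

Even though individual columns disagree, the consensus is stable; in the theory this is why perception feels unified despite being built from thousands of partial models.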

#human

@TomerUllman: I had an AI (GPT3) generate 10 “thought experiments” (based on classic ones as input), and asked @WhiteBoardG to sketch them.

Read More
#image-recognition, #nlp