Reshaping Business With Artificial Intelligence

Expectations for artificial intelligence (AI) are sky-high, but what are businesses actually doing now? The goal of this report is to present a realistic baseline against which companies can compare their AI ambitions and efforts. Grounded in data rather than conjecture, the research draws on a global survey of more than 3,000 executives, managers, and analysts across industries, as well as in-depth interviews with more than 30 technology experts and executives. (See “About the Research,” page 2.)

The gap between ambition and execution is large at most companies. Three-quarters of executives believe AI will enable their companies to move into new businesses, and almost 85% believe it will allow them to obtain or sustain a competitive advantage. Yet only about one in five companies has incorporated AI into some offerings or processes, and only one in 20 has done so extensively. Less than 39% of all companies have an AI strategy in place. The largest companies — those with at least 100,000 employees — are the most likely to have an AI strategy, but even among them only half do.

Our research reveals large gaps between today’s leaders — companies that already understand and have adopted AI — and laggards. One sizeable difference is their approach to data. AI algorithms are not natively “intelligent”; they learn inductively by analyzing data. While most leaders are investing in AI talent and have built robust information infrastructures, other companies lack analytics expertise and easy access to their data. Our research surfaced several misunderstandings about the resources needed to train AI. Not only do the leaders have a much deeper appreciation than laggards of what is required to produce AI, they are also more likely to have senior leadership support and to have developed a business case for AI initiatives. Read More

#strategy

AI systems should be accountable, explainable, and unbiased, says EU

The EU convened a group of 52 experts, who drew up seven requirements they believe future AI systems should meet:

Human agency and oversight — AI should not trample on human autonomy. People should not be manipulated or coerced by AI systems, and humans should be able to intervene or oversee every decision that the software makes.

Technical robustness and safety — AI should be secure and accurate. It shouldn’t be easily compromised by external attacks (such as adversarial examples), and it should be reasonably reliable.

Privacy and data governance — Personal data collected by AI systems should be secure and private. It shouldn’t be accessible to just anyone, and it shouldn’t be easily stolen.

Transparency — Data and algorithms used to create an AI system should be accessible, and the decisions made by the software should be “understood and traced by human beings.” In other words, operators should be able to explain the decisions their AI systems make.

Diversity, non-discrimination, and fairness — Services provided by AI should be available to all, regardless of age, gender, race, or other characteristics. Similarly, systems should not be biased along these lines.

Environmental and societal well-being — AI systems should be sustainable (i.e., they should be ecologically responsible) and “enhance positive social change.”

Accountability — AI systems should be auditable and covered by existing protections for corporate whistleblowers. Negative impacts of systems should be acknowledged and reported in advance. Read More

#ethics