AI Strategies – Incremental and Fundamental Improvements

Before you start developing an AI strategy, make sure your team understands the limits of what is reasonable today, as well as the incremental improvements that are easy to overlook.  Focus on your line-of-business (LOB) leaders: they already understand the business, so make sure they can also recognize AI opportunities.

I recently saw a chart from PricewaterhouseCoopers’ ‘2018 AI Predictions’ report.  It shows how strongly roughly 2,200 C-level and IT respondents at large and mid-size companies internationally agreed with the statement: ‘we effectively utilize all the data we capture to drive business value’.

Keep in mind that the survey focused on future applications of AI.  What immediately struck me is that these responses are almost exactly the inverse of what we should expect.  Read More

#strategy

A Framework for Intelligence and Cortical Function Based on Grid Cells in the Neocortex

How the neocortex works is a mystery. In this paper we propose a novel framework for understanding its function. Grid cells are neurons in the entorhinal cortex that represent the location of an animal in its environment. Recent evidence suggests that grid cell-like neurons may also be present in the neocortex. We propose that grid cells exist throughout the neocortex, in every region and in every cortical column. They define a location-based framework for how the neocortex functions. Whereas grid cells in the entorhinal cortex represent the location of one thing, the body relative to its environment, we propose that cortical grid cells simultaneously represent the location of many things. Cortical columns in somatosensory cortex track the location of tactile features relative to the object being touched and cortical columns in visual cortex track the location of visual features relative to the object being viewed. We propose that mechanisms in the entorhinal cortex and hippocampus that evolved for learning the structure of environments are now used by the neocortex to learn the structure of objects. Having a representation of location in each cortical column suggests mechanisms for how the neocortex represents object compositionality and object behaviors. It leads to the hypothesis that every part of the neocortex learns complete models of objects and that there are many models of each object distributed throughout the neocortex. The similarity of circuitry observed in all cortical regions is strong evidence that even high-level cognitive tasks are learned and represented in a location-based framework. Read More
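
To make the core mechanism concrete, here is a toy sketch (mine, not the authors’ code) of a single ‘column’ that learns each object as a set of (location, feature) pairs in the object’s own reference frame, then recognizes objects by integrating movements with sensed features and discarding inconsistent candidates.  The object names, features, and grid layout below are invented for illustration.

```python
# Toy sketch of a location-based column: objects are stored as
# {location: feature} maps; recognition integrates movements and
# sensations, pruning objects whose predictions fail.

class ToyColumn:
    def __init__(self):
        self.objects = {}  # object name -> {(x, y) location: feature}

    def learn(self, name, features_at_locations):
        self.objects[name] = dict(features_at_locations)

    def infer(self, first_feature, movements_and_features):
        # Every stored location carrying the first sensed feature is a
        # possible "current location" on that object.
        candidates = {
            name: {loc for loc, feat in feats.items() if feat == first_feature}
            for name, feats in self.objects.items()
        }
        # Each movement shifts the candidate locations; each new sensation
        # prunes locations (and hence objects) that predict the wrong feature.
        for (dx, dy), feature in movements_and_features:
            for name, locs in candidates.items():
                moved = {(x + dx, y + dy) for x, y in locs}
                candidates[name] = {
                    loc for loc in moved if self.objects[name].get(loc) == feature
                }
        return [name for name, locs in candidates.items() if locs]


if __name__ == "__main__":
    column = ToyColumn()
    column.learn("mug", {(0, 0): "flat", (1, 0): "curve", (1, 1): "handle"})
    column.learn("can", {(0, 0): "flat", (1, 0): "curve", (1, 1): "curve"})
    # Feeling "flat", then moving by (+1, +1) and feeling "handle",
    # is consistent with the mug but not the can.
    print(column.infer("flat", [((1, 1), "handle")]))  # -> ['mug']
```

The ambiguous first sensation matches both objects; the movement plus second sensation eliminates one of them, which is the elimination-by-movement dynamic the framework describes.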

#human

Superconducting Optoelectronic Neurons V: Networks and Scaling

Networks of superconducting optoelectronic neurons are investigated for their near-term technological potential and long-term physical limitations. Networks with short average path length, high clustering coefficient, and power-law degree distribution are designed using a growth model that assigns connections between new and existing nodes based on spatial distance as well as degree of existing nodes. The network construction algorithm is scalable to arbitrary levels of network hierarchy and achieves systems with fractal spatial properties and efficient wiring. By modeling the physical size of superconducting optoelectronic neurons, we calculate the area of these networks. A system with 8100 neurons and 330,430 total synapses will fit on a 1 cm × 1 cm die. Systems of millions of neurons with hundreds of millions of synapses will fit on a 300 mm wafer. For multi-wafer assemblies, communication at light speed enables a neuronal pool the size of a large data center (10⁵ m²) comprising 100 trillion neurons with coherent oscillations at 1 MHz. Assuming a power-law frequency distribution, as is necessary for self-organized criticality, we calculate the power consumption of the networks. We find the use of single photons for communication and superconducting circuits for computation leads to power density low enough to be cooled by liquid ⁴He for networks of any scale. Read More
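
For intuition about the growth model, here is a toy sketch of a spatial preferential-attachment rule (not the paper’s algorithm; the exact attachment weight and the distance exponent `alpha` are my assumptions): each new node connects to existing nodes with probability weighted by their degree and penalized by spatial distance, which tends to produce hub-like degree distributions alongside short local wiring.

```python
# Toy spatial preferential-attachment growth: new nodes favor
# high-degree neighbors but are penalized for long connections.
import math
import random

def grow_network(n_nodes=200, m_edges=3, alpha=2.0, seed=0):
    rng = random.Random(seed)
    positions = [(rng.random(), rng.random()) for _ in range(n_nodes)]
    degree = [0] * n_nodes
    edges = []
    # Seed the graph with a small fully connected core.
    core = m_edges + 1
    for i in range(core):
        for j in range(i + 1, core):
            edges.append((i, j))
            degree[i] += 1
            degree[j] += 1
    for new in range(core, n_nodes):
        x, y = positions[new]
        weights = []
        for old in range(new):
            ox, oy = positions[old]
            dist = math.hypot(x - ox, y - oy) + 1e-9
            # Attachment weight: favor high degree, penalize distance.
            weights.append(degree[old] / dist**alpha)
        targets = set()
        while len(targets) < m_edges:
            targets.add(rng.choices(range(new), weights=weights, k=1)[0])
        for old in targets:
            edges.append((new, old))
            degree[new] += 1
            degree[old] += 1
    return positions, edges, degree

if __name__ == "__main__":
    _, edges, degree = grow_network()
    print(len(edges), max(degree))  # total edges and the largest hub degree
```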

#human