AI and machine learning funds rose 6.4% in 2019, according to a popular gauge. As the article shows, these funds performed worse than random traders would have. We list a few possible reasons for these disappointing results.
According to Eurekahedge, AI and machine learning funds rose 6.4% in 2019, while the S&P 500 index rose 28.9%, or 31.5% on a total return basis. Read More
Quantum experiments explore power of light for communications, computing
A team from the Department of Energy’s Oak Ridge National Laboratory has conducted a series of experiments to gain a better understanding of quantum mechanics and pursue advances in quantum networking and quantum computing, which could lead to practical applications in cybersecurity and other areas. Read More
Whoever leads in artificial intelligence in 2030 will rule the world until 2100
A couple of years ago, Vladimir Putin warned Russians that the country that leads in artificial intelligence technologies will dominate the globe. He was right to be worried. Russia is now a minor player, and the race now seems to be mainly between the United States and China. But don’t count out the European Union just yet; the EU is still a fifth of the world economy, and it has underappreciated strengths. Technological leadership will require big digital investments, rapid business process innovation, and efficient tax and transfer systems. China appears to have the edge in the first, the U.S. in the second, and Western Europe in the third. One out of three won’t do, and even two out of three will not be enough; whoever does all three best will dominate the rest. Read More
Putting An End to End-to-End: Gradient-Isolated Learning of Representations
We propose a novel deep learning method for local self-supervised representation learning that requires neither labels nor end-to-end backpropagation but instead exploits the natural order in data. Inspired by the observation that biological neural networks appear to learn without backpropagating a global error signal, we split a deep neural network into a stack of gradient-isolated modules. Each module is trained to maximally preserve the information of its inputs using the InfoNCE bound from Oord et al. [2018]. Despite this greedy training, we demonstrate that each module improves upon the output of its predecessor, and that the representations created by the top module yield highly competitive results on downstream classification tasks in the audio and visual domains. The proposal enables optimizing modules asynchronously, allowing large-scale distributed training of very deep neural networks on unlabelled datasets. Read More
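To make the core idea concrete, here is a minimal sketch, assuming PyTorch, of a stack of gradient-isolated modules trained with a CPC-style InfoNCE loss. This is not the authors' implementation: the toy 1D-convolutional encoder, the linear prediction heads, the number of prediction steps, and all sizes are illustrative assumptions. The essential points it shows are that each module computes its contrastive loss locally and detaches its output before passing it on, so no gradient crosses module boundaries.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GradientIsolatedModule(nn.Module):
    """One encoder block trained locally with an InfoNCE-style contrastive loss."""

    def __init__(self, in_channels, out_channels, prediction_steps=3):
        super().__init__()
        # Toy encoder; the paper uses deeper convolutional stacks.
        self.encoder = nn.Sequential(
            nn.Conv1d(in_channels, out_channels, kernel_size=4, stride=2, padding=1),
            nn.ReLU(),
        )
        # One linear predictor per future offset, as in CPC-style InfoNCE.
        self.predictors = nn.ModuleList(
            nn.Linear(out_channels, out_channels) for _ in range(prediction_steps)
        )

    def infonce_loss(self, z):
        # z: (batch, channels, time). Each context step must identify its own
        # future step among the other batch items (the negatives).
        b, c, t = z.shape
        loss = 0.0
        for k, predictor in enumerate(self.predictors, start=1):
            context = z[:, :, : t - k].permute(0, 2, 1)   # (b, t-k, c)
            future = z[:, :, k:].permute(0, 2, 1)          # (b, t-k, c)
            pred = predictor(context)                      # (b, t-k, c)
            # logits[t, i, j] = <prediction of sample i, true future of sample j>
            logits = torch.einsum("itc,jtc->tij", pred, future)
            labels = torch.arange(b, device=z.device).repeat(t - k)
            loss = loss + F.cross_entropy(logits.reshape(-1, b), labels)
        return loss / len(self.predictors)

    def forward(self, x):
        z = self.encoder(x)
        loss = self.infonce_loss(z)
        # Detach before handing the representation to the next module,
        # so no error signal is backpropagated across the module boundary.
        return z.detach(), loss


# Toy stack of three gradient-isolated modules, each with its own optimizer.
modules = [GradientIsolatedModule(1, 32),
           GradientIsolatedModule(32, 32),
           GradientIsolatedModule(32, 32)]
optimizers = [torch.optim.Adam(m.parameters(), lr=1e-3) for m in modules]

x = torch.randn(8, 1, 64)             # illustrative batch: (batch, channels, time)
for module, opt in zip(modules, optimizers):
    x, loss = module(x)                # x is already detached: training stays local
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Because every module only ever sees a detached input and optimizes its own loss, the modules could in principle be trained asynchronously on different devices, which is the distributed-training benefit the abstract points to.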