A solution to the learning dilemma for recurrent networks of spiking neurons

Recurrently connected networks of spiking neurons underlie the astounding information processing capabilities of the brain. Yet in spite of extensive research, how they can learn through synaptic plasticity to carry out complex network computations remains unclear. We argue that two pieces of this puzzle were provided by experimental data from neuroscience. A mathematical result tells us how these pieces need to be combined to enable biologically plausible online network learning through gradient descent, in particular deep reinforcement learning. This learning method, called e-prop, approaches the performance of backpropagation through time (BPTT), the best-known method for training recurrent neural networks in machine learning. In addition, it suggests a method for powerful on-chip learning in energy-efficient spike-based hardware for artificial intelligence. Read More
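
In e-prop, the gradient that BPTT computes backward in time is approximated by the product of a per-neuron learning signal and a per-synapse eligibility trace, both of which can be computed forward in time. Below is a minimal, hypothetical Python sketch of that update pattern; the random stand-in signals and toy dimensions are assumptions for illustration, not the paper's spiking-neuron model.

```python
import numpy as np

# Toy e-prop-style update: each synapse keeps an eligibility trace computed
# forward in time, and an online learning signal gates the weight change.
# All signals here are random stand-ins, not actual spiking dynamics.
rng = np.random.default_rng(0)
n_in, n_rec, T = 5, 4, 100
W = rng.normal(scale=0.1, size=(n_rec, n_in))  # synaptic weights
trace = np.zeros_like(W)                       # eligibility traces e_ji(t)
decay, lr = 0.9, 1e-3                          # assumed trace decay, step size

for t in range(T):
    x = rng.random(n_in)          # stand-in presynaptic activity at time t
    trace = decay * trace + x     # traces accumulate forward in time
    L = rng.normal(size=n_rec)    # stand-in learning signal L_j(t)
    W += lr * L[:, None] * trace  # local online update: no backward pass
```

Because the update at time t depends only on quantities available at time t, it can run online, which is what makes it a candidate for on-chip learning in spike-based hardware.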

#performance, #recurrent-neural-networks

Visualizing Vectors — Jed Crosby, Head of Data Science at Clari

Read More

#data-science, #videos

China aims to dominate the biggest technologies in our lives

Generation China is a CNET series about how the country is staking out positions in big areas of tech, from 5G to social media, with players like Huawei and TikTok.

An economic powerhouse pummeled by the coronavirus. A key trade partner. A competitive threat. An authoritarian government willing to censor its citizens and violate their human rights. China bears many labels depending on who — or when — you ask. But one thing is clear: the world’s most populous country and second-largest economy has been steadily developing into a technological powerhouse with the potential to upend the status quo. Read More

#china-vs-us

Machine Learning for a Better Developer Experience

Imagine having to go through 2.5 GB of log entries from a failed software build — 3 million lines — to search for a bug or a regression that happened around line 1 million. Doing that manually is hardly feasible. One smart approach to make it tractable is to diff the lines against a recent successful build, in the hope that the bug produces unusual lines in the logs.

A standard md5 diff runs quickly but still produces at least hundreds of thousands of candidate lines to look through, because any character-level difference between lines surfaces as a candidate. Fuzzy diffing using k-nearest-neighbors clustering from machine learning (the kind of thing logreduce does) narrows this to around 40,000 candidate lines, but takes an hour to complete. Our solution produces 20,000 candidate lines in 20 minutes of computing — and thanks to the magic of open source, it’s only about a hundred lines of Python code. Read More
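
As a rough illustration of the fuzzy-diff idea (the sample lines and threshold below are hypothetical, and this is not logreduce’s actual code), one can index the lines of a known-good build with a nearest-neighbors model and flag lines of the failed build that have no close match:

```python
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.neighbors import NearestNeighbors

# Lines from a recent successful build serve as the baseline.
baseline = ["build ok step 1", "compiling module core", "tests passed"]
failed = ["compiling module core", "segfault in worker 3"]

# Hash each line into a sparse feature vector, then index the baseline.
vec = HashingVectorizer(analyzer="word", n_features=2**16)
index = NearestNeighbors(n_neighbors=1).fit(vec.transform(baseline))

# A failed-build line far from every baseline line is a candidate anomaly.
dist, _ = index.kneighbors(vec.transform(failed))
THRESHOLD = 0.5  # hypothetical cutoff; tune on real logs
for line, d in zip(failed, dist[:, 0]):
    if d > THRESHOLD:
        print(f"candidate ({d:.2f}): {line}")
```

Unlike an exact md5 diff, a near-duplicate line (say, the same message with a different timestamp) lands close to a baseline neighbor and is filtered out, which is what shrinks the candidate set.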

#devops

Could this software help users trust machine learning decisions?

New software developed by BAE Systems could help the Department of Defense build confidence in decisions and intelligence produced by machine learning algorithms, the company claims.

BAE Systems said it recently delivered its new MindfuL software program to the Defense Advanced Research Projects Agency in a July 14 announcement. Developed in collaboration with the Massachusetts Institute of Technology’s Computer Science and Artificial Intelligence Laboratory, the software is designed to increase transparency in machine learning systems—artificial intelligence algorithms that learn and change over time as they are fed ever more data—by auditing them to provide insights into how they reach their decisions. Read More

#dod, #explainability

How an AI graphic designer convinced clients it was human

Nikolay Ironov had been working as a graphic designer for more than a year before he revealed his secret.

As an employee of Art. Lebedev Studio — Russia’s largest design company — Ironov had already worked on more than 20 commercial projects, creating everything from beer bottle labels to startup logos.

But Ironov was not the person he claimed to be. In fact, the designer was not a person at all. Read More

#image-recognition, #nlp, #vfx

Traditional vs. Deep Learning Algorithms Used in Blockchain in the Retail Industry

This blog highlights different ML algorithms used in blockchain transactions, with a special emphasis on bitcoin in retail payments. It is structured as follows:

— Overview of the role of blockchain in the retail industry.
— Traditional algorithms (Secure SVM, Bagging, Boosting, Clustering) vs. deep learning algorithms (LSTM, CNN, and GAN) used in bitcoin retail payments.

Read More
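
For flavor, here is a tiny, purely illustrative sketch of the “traditional” side of that comparison; the features and labels are synthetic stand-ins, not data from the post:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Hypothetical transaction features (e.g., amount, fee, input count) and a
# synthetic fraud label, used only to show the shape of an SVM-based flagger.
rng = np.random.default_rng(1)
X = rng.random((200, 3))
y = (X[:, 0] + X[:, 1] > 1.2).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```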

#blockchain, #deep-learning, #iot

Data Science, Quarantined

Companies are beginning to reboot their machine learning and analytics, which have been disrupted by the global pandemic.

The economic impact of COVID-19 is unprecedented, dramatically changing markets and prospects for economic growth. Supply chains, transportation, food processing, retail, e-commerce, and many other industries have transformed overnight. Unemployment in the U.S. has reached levels unknown in recent memory, and GDP is expected to fall around the world. As one economic journalist summed up the situation: “Nearly everything in the world is super-weird and disrupted right now.” Read More

#data-science

We Have Already Let The Genie Out of The Bottle

How will we make sure that Artificial Intelligence won’t run amok and will be a force for good?

There are many areas where governance frameworks and international agreements about the use of artificial intelligence (AI) are needed. For example, there is an urgent need for internationally shared rules governing autonomous weapons and the use of facial recognition to target minorities and suppress dissent. Eliminating bias in algorithms for criminal sentencing, credit allocation, social media curation and many other areas should be an essential focus for both research and the spread of best practices. Read More

#artificial-intelligence, #singularity, #bias

OpenAI’s new language generator GPT-3 is shockingly good—and completely mindless

“Playing with GPT-3 feels like seeing the future,” Arram Sabeti, a San Francisco-based developer and artist, tweeted last week. That pretty much sums up the response on social media in the last few days to OpenAI’s latest language-generating AI. Read More

#nlp