Implicitly defined, continuous, differentiable signal representations parameterized by neural networks have emerged as a powerful paradigm, offering many possible benefits over conventional representations. However, current network architectures for such implicit neural representations are incapable of modeling signals with fine detail, and fail to represent a signal's spatial and temporal derivatives, despite the fact that these are essential to many physical signals defined implicitly as the solution to partial differential equations. We propose to leverage periodic activation functions for implicit neural representations and demonstrate that these networks, dubbed sinusoidal representation networks or SIRENs, are ideally suited for representing complex natural signals and their derivatives. We analyze SIREN activation statistics to propose a principled initialization scheme and demonstrate the representation of images, wavefields, video, sound, and their derivatives. Further, we show how SIRENs can be leveraged to solve challenging boundary value problems, such as particular Eikonal equations (yielding signed distance functions), the Poisson equation, and the Helmholtz and wave equations. Lastly, we combine SIRENs with hypernetworks to learn priors over the space of SIREN functions. Please see the project website for a video overview of the proposed method and all applications. Read More
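The abstract mentions a principled initialization scheme for periodic activations. As a rough illustration, the sketch below shows one common way a sine-activated layer of this kind is written in PyTorch, using the frequency factor omega_0 = 30 and the uniform initialization bounds reported in the SIREN paper; the class name and exact structure are illustrative assumptions, not the authors' reference code.

```python
import numpy as np
import torch
import torch.nn as nn

class SineLayer(nn.Module):
    """One SIREN-style layer: y = sin(omega_0 * (W x + b)).

    Illustrative sketch. omega_0 = 30 and the initialization bounds follow
    the paper's recommendation: the first layer is drawn from
    U(-1/fan_in, 1/fan_in); later layers from
    U(-sqrt(6/fan_in)/omega_0, sqrt(6/fan_in)/omega_0), which keeps
    pre-activations in a range where the sine is well behaved.
    """

    def __init__(self, in_features, out_features, omega_0=30.0, is_first=False):
        super().__init__()
        self.omega_0 = omega_0
        self.linear = nn.Linear(in_features, out_features)
        with torch.no_grad():
            if is_first:
                bound = 1.0 / in_features
            else:
                bound = np.sqrt(6.0 / in_features) / omega_0
            self.linear.weight.uniform_(-bound, bound)

    def forward(self, x):
        return torch.sin(self.omega_0 * self.linear(x))

# Example: a small SIREN mapping 2D pixel coordinates to RGB values.
siren = nn.Sequential(
    SineLayer(2, 256, is_first=True),
    SineLayer(256, 256),
    SineLayer(256, 256),
    nn.Linear(256, 3),  # the final layer is typically linear
)
```

Because the sine is smooth, derivatives of the fitted signal with respect to the input coordinates can be obtained exactly via automatic differentiation, which is what makes such networks usable inside the derivative-constrained losses the abstract describes.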
Deepfakes Are Becoming the Hot New Corporate Training Tool
This month, advertising giant WPP will send unusual corporate training videos to tens of thousands of employees worldwide. A presenter will speak in the recipient’s language and address them by name, while explaining some basic concepts in artificial intelligence. The videos themselves will be powerful demonstrations of what AI can do: The face, and the words it speaks, will be synthesized by software.
WPP doesn’t bill them as such, but its synthetic training videos might be called deepfakes, a loose term applied to AI-generated images or videos that look real. Read More
Ingestion of ethanol just prior to sleep onset impairs memory for procedural but not declarative tasks
Study objectives: The aim of Experiment 1 was to determine if moderate ethanol consumption at bedtime would result in memory loss for recently learned cognitive procedural and declarative tasks. The aim of Experiment 2 was to establish that the memory loss due to alcohol consumption at bedtime was due to the effect of alcohol on sleep states. Read More
A solution to the learning dilemma for recurrent networks of spiking neurons
Recurrently connected networks of spiking neurons underlie the astounding information processing capabilities of the brain. Yet in spite of extensive research, how they can learn through synaptic plasticity to carry out complex network computations remains unclear. We argue that two pieces of this puzzle were provided by experimental data from neuroscience. A mathematical result tells us how these pieces need to be combined to enable biologically plausible online network learning through gradient descent, in particular deep reinforcement learning. This learning method, called e-prop, approaches the performance of backpropagation through time (BPTT), the best-known method for training recurrent neural networks in machine learning. In addition, it suggests a method for powerful on-chip learning in energy-efficient spike-based hardware for artificial intelligence. Read More
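At the core of e-prop is a factorization of the BPTT gradient into a local eligibility trace, which each synapse can compute forward in time, and a top-down learning signal. The sketch below is a hypothetical, simplified illustration of that idea for a single synapse of a leaky integrate-and-fire neuron; the function name, the pseudo-derivative psi, and the leak factor alpha are assumed notation, not the paper's reference implementation.

```python
import numpy as np

def eprop_update(z_pre, psi_post, learning_signal, alpha=0.9, lr=1e-3):
    """Online weight update for one synapse over T time steps (sketch).

    z_pre:           presynaptic spikes, shape (T,)
    psi_post:        postsynaptic pseudo-derivative of the spike
                     nonlinearity, shape (T,)
    learning_signal: top-down error signal at the postsynaptic
                     neuron, shape (T,)
    alpha:           membrane leak factor of the LIF neuron
    """
    trace = 0.0  # low-pass filtered presynaptic activity (eligibility vector)
    grad = 0.0
    for t in range(len(z_pre)):
        trace = alpha * trace + z_pre[t]   # evolves forward in time, locally
        e = psi_post[t] * trace            # eligibility trace
        grad += learning_signal[t] * e     # accumulate gradient estimate
    return -lr * grad                      # proposed weight change
```

The key property is that both factors are available online, so the gradient estimate is accumulated as the network runs, instead of requiring a backward pass through the unrolled computation graph as in BPTT.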