How it all started to where we are now
A perceptron is a simple artificial model of a biological neuron. It is a single-layer neural network algorithm used for supervised learning. It consists of input values, weights, a bias, and an activation function.
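As a minimal sketch of that forward pass (assuming NumPy and a step activation; the AND-gate weights and bias below are illustrative, not taken from any particular source):

```python
import numpy as np

def perceptron(x, weights, bias):
    """Single perceptron: weighted sum of inputs plus bias, passed through a step activation."""
    weighted_sum = np.dot(weights, x) + bias
    return 1 if weighted_sum > 0 else 0  # step activation function

# Illustrative values: a perceptron acting as a logical AND gate
weights = np.array([1.0, 1.0])
bias = -1.5
for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, perceptron(np.array(x), weights, bias))
```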
Artificial intelligence (AI) is the new electricity. It has become the talk of the town. Fancy words like machine learning (ML) and deep learning (DL) are now a mandatory part of every product or solution offered by the corporate world to galvanize its clients and end users. … ML is here to stay, and if you are a developer looking to upskill, I suggest you start learning. Read More
Top 8 Machine Learning Tools For Cybersecurity
Today, techniques like AI and machine learning are used in almost every sector. They help organisations in many ways, from extracting insights from raw data to predicting future outcomes.
For all the benefits of AI and ML, the use of machine learning techniques in cybersecurity began only a few years ago and is still at a niche stage. AI in cybersecurity can help in various ways, such as identifying malicious code, self-training on new threats, and more. Read More
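As one hedged illustration of the idea (my own toy sketch, not taken from the article), the snippet below trains a scikit-learn classifier on a few hand-made URL features to flag potentially malicious URLs; the features and the tiny training set are invented purely for demonstration:

```python
from sklearn.ensemble import RandomForestClassifier

def url_features(url: str):
    """Very simple, hand-crafted features sometimes used as a starting point for URL screening."""
    return [
        len(url),                          # long URLs are slightly more suspicious
        url.count("."),                    # many subdomains
        url.count("-"),                    # hyphen-heavy hostnames
        int("@" in url),                   # embedded-credentials trick
        int(not url.startswith("https")),  # plain http rather than https
    ]

# Tiny invented training set: 1 = malicious, 0 = benign
urls = [
    ("https://example.com/login", 0),
    ("https://docs.python.org/3/", 0),
    ("http://secure-paypal.com.account-verify.xyz/@update", 1),
    ("http://free-gift-cards.win/claim-now", 1),
]
X = [url_features(u) for u, _ in urls]
y = [label for _, label in urls]

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(clf.predict([url_features("http://bank-login.verify-account.top/@signin")]))
```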
Do We Live in a Simulation? Chances Are about 50–50
Gauging whether or not we dwell inside someone else’s computer may come down to advanced AI research—or measurements at the frontiers of cosmology
It is not often that a comedian gives an astrophysicist goose bumps when discussing the laws of physics. But comic Chuck Nice managed to do just that in a recent episode of the podcast StarTalk. The show’s host Neil deGrasse Tyson had just explained the simulation argument—the idea that we could be virtual beings living in a computer simulation. If so, the simulation would most likely create perceptions of reality on demand rather than simulate all of reality all the time—much like a video game optimized to render only the parts of a scene visible to a player. “Maybe that’s why we can’t travel faster than the speed of light, because if we could, we’d be able to get to another galaxy,” said Nice, the show’s co-host, prompting Tyson to gleefully interrupt. “Before they can program it,” the astrophysicist said, delighting at the thought. “So the programmer put in that limit.” Read More
Neil deGrasse Tyson Explains the Simulation Hypothesis
Heroes of NLP
Inventing Virtual Meetings of Tomorrow with NVIDIA AI Research
NVIDIA Maxine is a fully accelerated platform SDK for developers of video conferencing services to build and deploy AI-powered features that use state-of-the-art models in their cloud. Video conferencing applications based on Maxine can reduce video bandwidth usage down to one-tenth of H.264 using AI video compression, dramatically reducing costs. Read More
#nvidia, #videos, #image-recognition
Training Generative Adversarial Networks with Limited Data
Training generative adversarial networks (GAN) using too little data typically leads to discriminator overfitting, causing training to diverge. We propose an adaptive discriminator augmentation mechanism that significantly stabilizes training in limited data regimes. The approach does not require changes to loss functions or network architectures, and is applicable both when training from scratch and when fine-tuning an existing GAN on another dataset. We demonstrate, on several datasets, that good results are now possible using only a few thousand training images, often matching StyleGAN2 results with an order of magnitude fewer images. We expect this to open up new application domains for GANs. We also find that the widely used CIFAR-10 is, in fact, a limited data benchmark, and improve the record FID from 5.59 to 2.42. Read More
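To make the adaptive part of that mechanism concrete, here is a rough NumPy sketch of the control loop: an augmentation probability p is nudged up or down using the paper’s overfitting heuristic r_t = E[sign(D(x_real))], which I recall being targeted at roughly 0.6. The discriminator scores, batch size, and adjustment step below are simulated stand-ins, not part of a real GAN training run:

```python
import numpy as np

rng = np.random.default_rng(0)

p = 0.0              # probability of augmenting a discriminator input
target = 0.6         # target value for the overfitting heuristic r_t (per the paper, as I recall)
adjust_step = 0.01   # how fast p moves per check (illustrative)

def overfitting_heuristic(d_scores_real):
    """r_t = E[sign(D(x_real))]: drifts toward 1 as the discriminator memorizes the real images."""
    return np.mean(np.sign(d_scores_real))

for step in range(1, 2001):
    # Stand-in for discriminator outputs on a real minibatch; in a real run these come from
    # the discriminator. Here they slowly drift positive to mimic growing overfitting.
    d_scores_real = rng.normal(loc=0.001 * step, scale=1.0, size=64)

    r_t = overfitting_heuristic(d_scores_real)
    # Raise the augmentation probability when the discriminator overfits, lower it otherwise.
    p = float(np.clip(p + adjust_step * np.sign(r_t - target), 0.0, 1.0))

    # Each image fed to the discriminator would be augmented with probability p at this point.
    if step % 500 == 0:
        print(f"step {step}: r_t={r_t:.2f}, augmentation probability p={p:.2f}")
```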
NSCAI Interim Report and Third Quarter Recommendations (October 2020)
Research remains the foundation of America’s technological leadership, and the government must make the investments to solidify this foundation for artificial intelligence (AI). In the First Quarter (Q1), the Commission recommended doubling non-defense AI R&D funding, focusing investments on six priority research areas, and launching a pilot of a National AI Research Resource. In the Second Quarter (Q2), the Commission examined the Department of Defense (DoD) research enterprise and recommended ways to overcome bureaucratic and resource constraints to accelerate national security-focused AI R&D. Read More
#dod, #ic
Understanding Transformers, the Data Science Way
Transformers have become the de facto standard for NLP tasks.
While the Transformer architecture was introduced for NLP, it is now being used in computer vision and music generation as well. I am sure you have all heard about the GPT-3 Transformer and its applications. Read More
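As a minimal, hedged example of putting a pretrained Transformer to work, here is a sketch using the Hugging Face transformers pipeline API; which sentiment model it downloads by default is an assumption on my part, and GPT-3 itself is only reachable through OpenAI’s API rather than this route:

```python
from transformers import pipeline

# Downloads a default pretrained Transformer for sentiment analysis the first time it runs.
classifier = pipeline("sentiment-analysis")
print(classifier("Transformers have become the de facto standard for NLP tasks."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```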
Real Time Machine Learning at Scale using SpaCy, Kafka & Seldon Core
The Next Generation Of Artificial Intelligence
The field of artificial intelligence moves fast. It has only been 8 years since the modern era of deep learning began at the 2012 ImageNet competition. Progress in the field since then has been breathtaking and relentless.
If anything, this breakneck pace is only accelerating. Five years from now, the field of AI will look very different than it does today. Read More (Part 1) … (Part 2)