Artificial intelligence is continuously evolving and propagating across every industry. With groundbreaking innovations continually moving the industry forward, the technology makes headlines every day. AI refers to software or systems that perform tasks associated with human intelligence, such as learning, reasoning, and judgment. Its applications range from the automation and natural-language translation systems that people use daily to image recognition systems that identify faces and letters in images. Today, AI appears in many forms, including digital assistants, chatbots, and machine learning systems.
Here’s a look at the top 10 AI Research Labs in the world that are leading research and development in AI and related technologies. Read More
Monthly Archives: November 2020
Adversarial Examples in Deep Learning — A Primer
Introducing adversarial examples in deep learning vision models
We have seen the advent of state-of-the-art (SOTA) deep learning models for computer vision ever since we started getting bigger and better compute (GPUs and TPUs), more data (ImageNet, etc.), and easy-to-use open-source software and tools (TensorFlow and PyTorch). Every year (and now every few months!) we see the next SOTA deep learning model dethrone the previous one in terms of top-k accuracy on benchmark datasets. The following figure depicts some of the latest SOTA deep learning vision models (and omits some, like Google’s BigTransfer!). Read More
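As a taste of what the primer covers, here is a minimal sketch of the classic fast gradient sign method (FGSM) for crafting an adversarial image; the PyTorch classifier, input tensor shapes, and epsilon value are illustrative assumptions, not code from the article.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.03):
    """Craft an adversarial example with the fast gradient sign method (FGSM)."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)  # loss for the true label
    loss.backward()
    # Nudge every pixel in the direction that increases the loss.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()  # keep pixels in a valid range
```

A perturbation this small is usually imperceptible to a human, yet it is often enough to flip the model’s prediction.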
Introducing software fuzzing – part of AI and ML in DevOps
The lines between the real world and the digital world have been steadily blurring for years, and with that, software has bloomed. Some physicists even hypothesize that information can be considered a form of matter, the fifth form of matter, in fact.
More and more, software is linked to the quality of our lives. That means the quality of our software will fundamentally direct the quality of our experience, so there’s never been a more important time to seek out ways to improve our DevOps. One of the tools that helps us explore that is ML. Read More
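To make the idea concrete, here is a toy mutation-based fuzzer in Python; the target and mutation strategy are illustrative assumptions, and real fuzzers (including the ML-guided ones the article discusses) are far more sophisticated.

```python
import random

def mutate(seed: bytes, n_flips: int = 4) -> bytes:
    """Corrupt a few random bytes of a known-good input."""
    data = bytearray(seed)
    for _ in range(n_flips):
        data[random.randrange(len(data))] = random.randrange(256)
    return bytes(data)

def fuzz(target, seed: bytes, iterations: int = 10_000) -> None:
    """Feed mutated inputs to `target` and report any that raise."""
    for _ in range(iterations):
        case = mutate(seed)
        try:
            target(case)
        except Exception as exc:  # a crash candidate worth triaging
            print(f"crash: {exc!r} on input {case!r}")
```

For example, `fuzz(my_parser, b"GET /index.html HTTP/1.1")` would hammer a hypothetical `my_parser` with corrupted requests until one makes it misbehave.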
The Future of AI is Artificial Sentience
How do you *feel* about that?
Much of today’s discussion around the future of artificial intelligence is focused on the possibility of achieving artificial general intelligence: essentially, an AI capable of tackling an array of unfamiliar tasks and working out how to approach a new task on its own, much like a human. But at this stage in the game, the discussion around this kind of intelligence seems less about if and more about when. With the advent of neural networks and deep learning, the sky is the limit, at least once other areas of technology overcome their remaining obstacles. Read More
21 amazing YouTube channels for you to learn AI, Machine Learning, and Data Science for free
This is the perfect moment to start learning something new, and why not start with AI?
I know the pandemic is keeping everyone at home, working from home is becoming the new normal for many of us, and it is hard to find good in-person training these days, but that does not mean you need to stop learning!
I would say that this is the perfect moment to start learning something new, and why not start with Data Science? Read More
It’s Hard For Neural Networks to Learn the Game of Life
Efforts to improve the learning abilities of neural networks have focused mostly on the role of optimization methods rather than on weight initializations. Recent findings, however, suggest that neural networks rely on lucky random initial weights of subnetworks called “lottery tickets” that converge quickly to a solution [8]. To investigate how weight initializations affect performance, we examine small convolutional networks that are trained to predict n steps of the two-dimensional cellular automaton Conway’s Game of Life [3], the update rules of which can be implemented efficiently in a 2n+1-layer convolutional network. We find that networks of this architecture trained on this task rarely converge. Rather, networks require substantially more parameters to consistently converge. In addition, near-minimal architectures are sensitive to tiny changes in parameters: changing the sign of a single weight can cause the network to fail to learn. Finally, we observe a critical value d_0 such that training minimal networks with examples in which cells are alive with probability d_0 dramatically increases the chance of convergence to a solution. We conclude that training convolutional neural networks to learn the input/output function represented by n steps of Game of Life exhibits many characteristics predicted by the lottery ticket hypothesis [8], namely, that the size of the networks required to learn this function is often significantly larger than the minimal network required to implement the function. Read More
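The paper’s observation that the update rules fit in a small convolutional network is easy to see in code: one Game of Life step is a 3x3 neighbor-count convolution followed by pointwise logic. Here is a short NumPy/SciPy sketch of the target function itself (not the paper’s trained network):

```python
import numpy as np
from scipy.signal import convolve2d

# 3x3 kernel that counts a cell's eight live neighbors.
KERNEL = np.array([[1, 1, 1],
                   [1, 0, 1],
                   [1, 1, 1]])

def life_step(board: np.ndarray) -> np.ndarray:
    """One Game of Life update: neighbor convolution, then birth/survival rules."""
    neighbors = convolve2d(board, KERNEL, mode="same", boundary="fill")
    return ((neighbors == 3) | ((board == 1) & (neighbors == 2))).astype(board.dtype)
```

Calling `life_step` n times computes exactly the n-step function the networks in the paper are trained to learn.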
The AI Company Helping the Pentagon Assess Disinfo Campaigns
In September, Azerbaijan and Armenia renewed fighting over Nagorno-Karabakh, a disputed territory in the Caucasus mountains. By then, an information warfare campaign over the region had been underway for several months.
The campaign was identified using artificial intelligence technology being developed for US Special Operations Command (SOCOM), which oversees US special forces operations.
The AI system, from Primer, a company focused on the intelligence industry, identified key themes in the information campaign by analyzing thousands of public news sources. In practice, Primer’s system can analyze classified information too. Read More
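Primer has not published the internals of its system, but the general recipe of surfacing themes from a large document collection can be sketched with off-the-shelf tools; the TF-IDF-plus-NMF approach below is an illustrative stand-in, not Primer’s method.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF

def key_themes(docs, n_themes=5, n_words=8):
    """Surface recurring themes in a corpus via TF-IDF and matrix factorization."""
    vectorizer = TfidfVectorizer(stop_words="english", max_features=5000)
    tfidf = vectorizer.fit_transform(docs)  # documents -> weighted term matrix
    nmf = NMF(n_components=n_themes, random_state=0).fit(tfidf)
    words = vectorizer.get_feature_names_out()
    # Return the top words for each discovered theme.
    return [[words[i] for i in topic.argsort()[-n_words:][::-1]]
            for topic in nmf.components_]
```

Run over thousands of articles, even this toy version will cluster recurring talking points into rough themes.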
The AI Advantage: Is your business ready for artificial intelligence?
The news about artificial intelligence is mostly dominated by sensational stories such as the ominous threat of deepfakes, deep learning algorithms that create fake blogs, AI bots that create their own language, and generative adversarial networks that create realistic portraits of non-existent people.
But the practical use of AI algorithms lags far behind the hype generated by the media. Read More
AI pioneer Geoff Hinton: “Deep learning is going to be able to do everything”
The modern AI revolution began during an obscure research contest. It was 2012, the third year of the annual ImageNet competition, which challenged teams to build computer vision systems that would recognize 1,000 objects, from animals to landscapes to people.
In the first two years, the best teams had failed to reach even 75% accuracy. But in the third, a band of three researchers—a professor and his students—suddenly blew past this ceiling. They won the competition by a staggering 10.8 percentage points. That professor was Geoffrey Hinton, and the technique they used was called deep learning. Read More
Creating End-to-End MLOps pipelines using Azure ML and Azure Pipelines
In this 7-part series of posts, we’ll be creating a minimal, repeatable MLOps pipeline using Azure ML and Azure Pipelines.
The git repository that accompanies these posts can be found here. Read More
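As a preview of the building blocks the series uses, here is a minimal sketch of submitting a training run with the Azure ML Python SDK (v1); the workspace config, script name, and compute target are illustrative placeholders, not the series’ actual pipeline code.

```python
from azureml.core import Workspace, Experiment, ScriptRunConfig

# Load workspace details from a local config.json (downloaded from the Azure portal).
ws = Workspace.from_config()

# "src/train.py" and "cpu-cluster" are placeholder names for this sketch.
config = ScriptRunConfig(source_directory="src",
                         script="train.py",
                         compute_target="cpu-cluster")

run = Experiment(workspace=ws, name="mlops-demo").submit(config)
run.wait_for_completion(show_output=True)
```

An end-to-end setup would wire steps like this into Azure Pipelines so that training, evaluation, and deployment can rerun automatically.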