How hackers are weaponizing artificial intelligence

From OCR to self-learning malware, hackers are now leaning on AI to bypass security systems.

  • Cybercrime is a lucrative activity and one that’s getting easier to enter
  • Threats are becoming more widespread and sophisticated, and attackers are increasingly leaning on AI to bypass security systems

In an age where everything is becoming connected and data is regarded as a business’s most valuable commodity, cybersecurity continues to diversify in a hyper-competitive marketplace. Read More

#cyber

Consequences of mistaking models for software

Twelve traps to avoid when building and deploying models

In Part 1 of this series, Data Scientists Are from Mars and Software Engineers Are from Venus, we examined the five key dimensions of difference between software and models. The natural follow-on question is: so what? Does it really matter if models are conflated with software and data scientists are treated as software engineers? After all, for a large cross-section of the population, and more importantly the business world, the similarities between them are far more visible than their differences. In fact, Andrej Karpathy refers to this new way of solving problems using models as Software 2.0. If models really are the next iteration of software, are these differences really consequential? Read More

#data-science, #devops

Data Scientists are from Mars and Software Developers are from Venus (Part 1)

Mars and Venus are very different planets. Mars’s atmosphere is very thin and it can get very cold, while Venus’s atmosphere is very thick and it can get very hot — hot enough to melt lead!

…Software Engineers and Data Scientists come from two different worlds — one from Venus and the other from Mars. They have different backgrounds and mindsets, and deal with different sets of issues. They also have a number of things in common. In this and subsequent blogs, we will look at the key differences (and similarities) between them, why those differences exist, and what kind of bridge we need to build between them. In this blog, we explore the fundamental differences between software and models. Read More

#data-science, #devops

Mapping U.S.–China Technology Decoupling

How disparate policies are unraveling a complex ecosystem

Over the past two decades, U.S. and Chinese technological trajectories have been closely linked. Internet protocols, hardware design and manufacturing, software development and deployment, and services and standards have to varying degrees been cross-border phenomena, with China and the United States two of the world’s most consequential and integrated countries.

The last few years, however, have seen a rise in mutual suspicion and moves—both direct and indirect—to unwind this extraordinary level of technological interdependence. The overall effect is an increasing degree of separation between the two ecosystems, a process widely known as decoupling. Read More

#china-vs-us

How AI will automate cybersecurity in the post-COVID world

By now, it is obvious to everyone that widespread remote working is accelerating the trend of digitization in society that has been happening for decades.

What takes longer for most people to identify are the derivative trends. One such trend is that increased reliance on online applications means that cybercrime is becoming even more lucrative. For many years now, online theft has vastly outstripped physical bank robberies. Willie Sutton said he robbed banks “because that’s where the money is.” If he applied that maxim even 10 years ago, he would definitely have become a cybercriminal, targeting the websites of banks, federal agencies, airlines, and retailers. According to the 2020 Verizon Data Breach Investigations Report, 86% of all data breaches were financially motivated. Today, with so much of society’s operations being online, cybercrime is the most common type of crime. Read More

#cyber

A Very Simple Introduction to Deep Learning on Amazon Sagemaker

Here is a very easy way to get started with deep learning in the cloud!

In this article, I will walk you through loading your data to S3 and then spinning up a Jupyter notebook instance on Amazon Sagemaker for running deep learning jobs. Read More
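The first step the article describes, staging data in S3 so a SageMaker notebook can read it, can be sketched as below. This is a minimal illustration, not the article's own code: the bucket name, object key, and file name are hypothetical placeholders, and the upload itself requires boto3 with configured AWS credentials.

```python
def s3_uri(bucket: str, key: str) -> str:
    """Build the s3:// URI that a SageMaker notebook reads data from."""
    return f"s3://{bucket}/{key}"

def upload_dataset(local_path: str, bucket: str, key: str) -> str:
    """Upload a local file to S3 and return its URI.

    Requires boto3 and configured AWS credentials.
    """
    import boto3  # imported here so the sketch loads even without boto3 installed
    boto3.client("s3").upload_file(local_path, bucket, key)
    return s3_uri(bucket, key)

# Example call (assumes the bucket exists and credentials are set up):
# upload_dataset("train.csv", "my-sagemaker-data", "datasets/train.csv")
```

Once the data is in S3, the notebook instance can reference it by the returned URI when launching training jobs.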

#mlaas, #python

TSMC and Graphcore Prepare for AI Acceleration on 3nm

One of the side announcements made during TSMC’s Technology Symposium was that it already has customers with product development in progress for its future 3nm process node technology. As we’ve reported previously, TSMC is developing its 3nm process for risk production next year and high-volume manufacturing in the second half of 2022, so at this time TSMC’s lead partners are already developing their future silicon on the initial versions of the 3nm PDKs.

One company highlighted during TSMC’s presentations was Graphcore. Graphcore is an AI silicon company that makes the IPU, an ‘Intelligence Processing Unit’, to accelerate ‘machine intelligence’. It recently announced its second-generation Colossus Mk2 IPU, built on TSMC’s N7 manufacturing process and featuring 59.2 billion transistors. The Mk2 has an effective core count of 1472 cores, which can run ~9000 threads for 250 teraflops of FP16 AI training workloads. The company puts four of these chips together in a single 1U machine to deliver 1 petaflop, along with 450 GB of memory and a custom low-latency fabric between the IPUs. Read More
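The per-machine figure follows from simple scaling of the per-chip number, which can be sanity-checked in a few lines:

```python
# Sanity-check the scaling claim: four Mk2 IPUs at 250 teraflops
# of FP16 each should total 1 petaflop per 1U machine.
tflops_per_ipu = 250
ipus_per_machine = 4

total_tflops = tflops_per_ipu * ipus_per_machine   # 1000 TFLOPS
petaflops = total_tflops / 1000                    # 1 PFLOP = 1000 TFLOPS
print(petaflops)  # 1.0
```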

#mlperf, #nvidia

The fourth generation of AI is here, and it’s called ‘Artificial Intuition’

Artificial Intelligence (AI) is one of the most powerful technologies ever developed, but it’s not nearly as new as you might think. In fact, it’s undergone several evolutions since its inception in the 1950s. The first generation of AI was ‘descriptive analytics,’ which answers the question, “What happened?” The second, ‘diagnostic analytics,’ addresses, “Why did it happen?” The third and current generation is ‘predictive analytics,’ which answers the question, “Based on what has already happened, what could happen in the future?” Read More

#human

Responsible AI Can Effectively Deploy Human-Centered Machine Learning Models

Artificial intelligence (AI) is rapidly developing into an extraordinarily powerful innovation with seemingly limitless applications. It has shown its capacity to automate routine tasks, such as our daily commute, while also augmenting human capability with new insight. Combining human imagination and creativity with the adaptability of machine learning is advancing our knowledge and understanding at a remarkable pace.

However, with great power comes great responsibility. In particular, AI raises concerns on numerous fronts because of its potentially disruptive impact. These concerns include workforce displacement, loss of privacy, potential biases in decision-making, and lack of control over automated systems and robots. While these issues are significant, they are also addressable with the right planning, oversight, and governance. Read More

#augmented-intelligence, #devops

Quantum leap for speed limit bounds

Physicists set far-more-accurate limits on speed of quantum information.

Nature’s speed limits aren’t posted on road signs, but Rice University physicists have discovered a new way to deduce them that is better — infinitely better, in some cases — than previous methods. Read More

#quantum