Two Air Force installations recently inked deals to use facial recognition technology to verify the identities of those coming on base, a move that allows for greater physical distance during security checks as the coronavirus pandemic continues.
The Air Force awarded TrueFace phase two Small Business Innovation Research contracts to install its technology at Eglin Air Force Base and Joint Base McGuire-Dix-Lakehurst. The company calls its system “frictionless access control,” where security personnel do not need to be present for a check, adding that it can verify a face in one to two seconds.
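TrueFace has not published its matching pipeline, but access-control systems of this kind typically work by comparing a face embedding captured at the gate against an enrolled record. A minimal sketch, assuming a hypothetical 512-dimensional embedding produced elsewhere by a trained face-recognition model:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(gate_embedding: np.ndarray,
           enrolled_embedding: np.ndarray,
           threshold: float = 0.6) -> bool:
    """Return True if the face at the gate matches the enrolled record.

    The threshold is illustrative; a real deployment would tune it
    against a target false-accept / false-reject trade-off.
    """
    return cosine_similarity(gate_embedding, enrolled_embedding) >= threshold

# Toy usage with random vectors standing in for real face embeddings,
# which a production system would compute with a trained neural network.
rng = np.random.default_rng(0)
enrolled = rng.normal(size=512)
probe = enrolled + rng.normal(scale=0.1, size=512)  # same person, new photo
print(verify(probe, enrolled))                      # True
print(verify(rng.normal(size=512), enrolled))       # False: a stranger
```

Because the comparison itself is a single vector operation, the one-to-two-second figure is plausibly dominated by image capture and embedding rather than by the match.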
Fighter aircraft will soon get AI pilots
But they will be wingmen, not squadron leaders
CLASSIC DOGFIGHTS, in which two pilots match wits and machines to shoot down their opponent with well-aimed gunfire, are a thing of the past. Guided missiles have seen to that, and the last recorded instance of such combat was 32 years ago, near the end of the Iran-Iraq war, when an Iranian F-4 Phantom took out an Iraqi Su-22 with its 20mm cannon.
But memory lingers, and dogfighting, even of the simulated sort in which the laws of physics are replaced by equations running inside a computer, is reckoned a good test of the aptitude of a pilot in training. And that is also true when the pilot in question is, itself, a computer program. So, when America’s Defence Advanced Research Projects Agency (DARPA), an adventurous arm of the Pentagon, considered the future of air-to-air combat and the role of artificial intelligence (AI) within that future, it began with basics that Manfred von Richthofen himself might have approved of.
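The “equations running inside a computer” need not be exotic: discretise simple equations of motion plus a guidance law and you have a crude dogfight simulator. A toy sketch of one such loop (the speeds, turn rate, and gun range are invented illustrative numbers, not anything from a DARPA trial):

```python
import math

def pursuit_step(px, py, ph, tx, ty,
                 speed=300.0, max_turn=math.radians(3), dt=0.1):
    """One time step of a pure-pursuit controller in two dimensions.

    (px, py, ph): pursuer position and heading (radians).
    (tx, ty):     target position.
    The pursuer turns toward the target by at most max_turn per step,
    then advances at constant speed, a crude stand-in for flight dynamics.
    """
    desired = math.atan2(ty - py, tx - px)
    # Smallest signed angle from the current heading to the desired one.
    error = (desired - ph + math.pi) % (2 * math.pi) - math.pi
    ph += max(-max_turn, min(max_turn, error))
    return px + speed * dt * math.cos(ph), py + speed * dt * math.sin(ph), ph

# Chase a target flying straight and level, heading east at 200 m/s.
px, py, ph = 0.0, 0.0, 0.0
for step in range(400):
    tx, ty = 2000.0 + 200.0 * 0.1 * step, 1000.0
    px, py, ph = pursuit_step(px, py, ph, tx, ty)
    if math.hypot(tx - px, ty - py) < 150.0:  # notional gun range
        print(f"guns solution at step {step}")
        break
```

A real trial wraps far richer flight models and learned agents around essentially this kind of stepped loop.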
AI is wrestling with a replication crisis
Tech giants dominate research, but the line between a real breakthrough and a product showcase can be fuzzy. Some scientists have had enough.
Last month Nature published a damning response written by 31 scientists to a study from Google Health that had appeared in the journal earlier this year. Google was describing successful trials of an AI that looked for signs of breast cancer in medical images. But according to its critics, the Google team provided so little information about its code and how it was tested that the study amounted to nothing more than a promotion of proprietary tech.
“We couldn’t take it anymore,” says Benjamin Haibe-Kains, the lead author of the response, who studies computational genomics at the University of Toronto. “It’s not about this study in particular—it’s a trend we’ve been witnessing for multiple years now that has started to really bother us.”
System brings deep learning to “internet of things” devices
Advance could enable artificial intelligence on household appliances while enhancing data security and energy efficiency.
Deep learning is everywhere. This branch of artificial intelligence curates your social media and serves your Google search results. Soon, deep learning could also check your vitals or set your thermostat. MIT researchers have developed a system that could bring deep learning neural networks to new — and much smaller — places, like the tiny computer chips in wearable medical devices, household appliances, and the 250 billion other objects that constitute the “internet of things” (IoT).
The system, called MCUNet, designs compact neural networks that deliver unprecedented speed and accuracy for deep learning on IoT devices, despite limited memory and processing power. The technology could facilitate the expansion of the IoT universe while saving energy and improving data security.
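The search machinery behind MCUNet is described in the team’s paper, but the constraint it designs against is easy to state: a network only fits on a microcontroller if its weights fit in flash and its peak activations fit in SRAM. A rough feasibility check in that spirit (the memory limits and layer sizes below are invented for illustration, not taken from the paper):

```python
# Rough fit check for an int8 conv net on a hypothetical microcontroller:
# weights must fit in flash, peak live activations must fit in SRAM.
FLASH_BYTES = 1024 * 1024   # 1 MB flash (assumed)
SRAM_BYTES = 320 * 1024     # 320 KB SRAM (assumed)

# (name, weight_params, output_activation_elements), one byte each at int8.
layers = [
    ("conv1", 3 * 8 * 3 * 3,      8 * 56 * 56),
    ("conv2", 8 * 16 * 3 * 3,     16 * 28 * 28),
    ("conv3", 16 * 32 * 3 * 3,    32 * 14 * 14),
    ("fc",    32 * 14 * 14 * 10,  10),
]

weight_bytes = sum(w for _, w, _ in layers)
# Peak SRAM is roughly the largest input+output activation pair alive at once.
acts = [3 * 112 * 112] + [a for _, _, a in layers]
peak_act_bytes = max(acts[i] + acts[i + 1] for i in range(len(layers)))

print(f"weights: {weight_bytes / 1024:.0f} KB of {FLASH_BYTES // 1024} KB flash")
print(f"peak activations: {peak_act_bytes / 1024:.0f} KB of {SRAM_BYTES // 1024} KB SRAM")
print("fits" if weight_bytes <= FLASH_BYTES and peak_act_bytes <= SRAM_BYTES else "too big")
```

Searching over thousands of candidate architectures under exactly these kinds of budgets is what lets a system squeeze useful accuracy out of kilobytes rather than gigabytes.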
When It Comes to Data Transfer, 5G Is Just the Beginning
If ever there was a technology tailor-made for the world we currently live in, it’s 5G. Everything we do seems based on the need for speed and connectivity. High bandwidth and low latency enable hospital employees working in remote ICUs to communicate with, and quickly send information back to, their main campuses. 5G will also be invaluable in smart cities with densely packed networks of devices that need to communicate and share information in real time. Then there are the more everyday tasks that power our lives: a video conference here, a media streaming break there.
But while 5G has the potential to be the engine that moves all of the various bits and bytes around in these examples, what really happens with those bits and bytes? How do we take advantage of that 5G infrastructure?
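To put rough numbers on what the bandwidth jump means in practice (the rates below are assumed ballpark figures, not measurements):

```python
# Back-of-the-envelope transfer times for a 2 GB medical imaging study.
FILE_BITS = 2 * 8e9  # 2 gigabytes expressed in bits

for label, mbps in [("typical 4G LTE (~50 Mbps)", 50), ("5G (~1 Gbps)", 1000)]:
    seconds = FILE_BITS / (mbps * 1e6)
    print(f"{label}: about {seconds:.0f} seconds")
# ~320 s versus ~16 s: the difference between waiting and near-real time.
```

Latency follows the same story: round trips measured in a few milliseconds rather than tens of milliseconds are what make the real-time use cases above credible.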
Defending against the cryptographic risk posed by quantum computing
The nation must address a significant future threat: the potential adversarial development and deployment of a quantum computer, a machine that extends the usual rules of computation via quantum physics. Such a deployment could have grave impacts on the security of the United States and its citizens if the proper technical mitigations are not put in place. Now is the time to prepare, in the four ways the authors highlight, for the complex transition to post-quantum algorithms, well before the advent of a quantum computer.
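The concrete threat is Shor’s algorithm, which breaks RSA-style cryptography by finding the period of modular exponentiation exponentially faster than any known classical method. The sketch below finds the same period classically, by brute force, on a number far too small to matter; the point is that a quantum computer does this step efficiently at cryptographic scale:

```python
from math import gcd

def factor_via_period(n: int, a: int = 2):
    """Factor n by finding the multiplicative order r of a modulo n.

    This brute-force period search is exponential in the bit length of n;
    Shor's algorithm finds the same period in polynomial time on a quantum
    computer, which is what endangers RSA-style keys at scale.
    Assumes gcd(a, n) == 1, as holds for a=2 and odd n.
    """
    r, x = 1, a % n
    while x != 1:               # smallest r with a**r = 1 (mod n)
        x = (x * a) % n
        r += 1
    if r % 2:
        return None             # odd period: retry with a different base a
    y = pow(a, r // 2, n)
    p = gcd(y - 1, n)
    return (p, n // p) if 1 < p < n else None

print(factor_via_period(15))    # (3, 5): trivial here, catastrophic at 2048 bits
```

Post-quantum algorithms are instead built on problems, such as those over lattices, for which no comparable quantum speedup is known.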
CCNY team in quantum algorithm breakthrough
Researchers led by City College of New York physicist Pouyan Ghaemi report the development of a quantum algorithm with the potential to study a class of many-electron quantum systems using quantum computers. Their paper, entitled “Creating and Manipulating a Laughlin-Type ν=1/3 Fractional Quantum Hall State on a Quantum Computer with Linear Depth Circuits,” appears in the December issue of PRX Quantum, a journal of the American Physical Society.
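“Linear depth” is the headline constraint: the number of sequential gate layers grows only linearly with the number of qubits, which keeps the circuit within reach of near-term hardware. The paper’s Laughlin-state circuit is specialized, but the flavor of linear-depth state preparation shows up in a much simpler, unrelated toy, a GHZ state built from a chain of CNOTs (a Qiskit sketch, not the paper’s circuit):

```python
from qiskit import QuantumCircuit

def ghz_linear_depth(n: int) -> QuantumCircuit:
    """Prepare an n-qubit GHZ state with a chain of CNOTs.

    Not the Laughlin-state circuit from the CCNY paper, just a familiar
    example of an entangled state whose preparation depth is O(n).
    """
    qc = QuantumCircuit(n)
    qc.h(0)
    for i in range(n - 1):
        qc.cx(i, i + 1)  # each CNOT in the chain adds one layer of depth
    return qc

print(ghz_linear_depth(5).depth())  # 5: depth grows linearly with qubit count
```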
Google’s SoundFilter AI separates any sound or voice from mixed-audio recordings
Researchers at Google claim to have developed a machine learning model that can separate a sound source from noisy, single-channel audio based on only a short sample of the target source. In a paper, they say their SoundFilter system can be tuned to filter arbitrary sound sources, even those it hasn’t seen during training.
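Google has not released SoundFilter’s code, but separation conditioned on a short sample can be sketched in outline: encode the sample into a conditioning vector, then use it to predict a mask over the mixture. A hypothetical PyTorch toy (SoundFilter itself works on raw waveforms; every shape and layer size here is illustrative only):

```python
import torch
import torch.nn as nn

class ConditionedMasker(nn.Module):
    """Toy conditioned filter, loosely in the spirit of SoundFilter.

    A small encoder turns a short sample of the target source into a
    conditioning vector; the masking network then predicts a per-bin
    mask over the mixture spectrogram.
    """
    def __init__(self, n_bins: int = 257, cond_dim: int = 64):
        super().__init__()
        self.cond_encoder = nn.GRU(n_bins, cond_dim, batch_first=True)
        self.masker = nn.Sequential(
            nn.Linear(n_bins + cond_dim, 256), nn.ReLU(),
            nn.Linear(256, n_bins), nn.Sigmoid(),  # mask values in [0, 1]
        )

    def forward(self, mixture, sample):
        # mixture: (batch, frames, bins); sample: (batch, sample_frames, bins)
        _, h = self.cond_encoder(sample)           # h: (1, batch, cond_dim)
        cond = h[-1].unsqueeze(1).expand(-1, mixture.size(1), -1)
        mask = self.masker(torch.cat([mixture, cond], dim=-1))
        return mask * mixture                      # filtered spectrogram

model = ConditionedMasker()
mix = torch.rand(2, 100, 257)  # two mixture spectrograms, 100 frames each
ref = torch.rand(2, 20, 257)   # short conditioning samples of the target
print(model(mix, ref).shape)   # torch.Size([2, 100, 257])
```

Trained on pairs of mixtures and isolated sources, a model of this shape learns to pass whatever resembles the conditioning sample and suppress the rest, which is what would let it handle sources unseen in training.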
The researchers believe a noise-eliminating system like SoundFilter could be used to create a range of useful technologies. For instance, Google drew on audio from thousands of its own meetings and YouTube videos to train the noise-canceling algorithm in Google Meet. Meanwhile, a team of Carnegie Mellon researchers created a “sound-action-vision” corpus to anticipate where objects will move when subjected to physical force.
The AI-Powered Cybersecurity Arms Race and its Perils
Advancement in the field of artificial intelligence (AI) remains one of the most important technological achievements in recent history. The prominence and prevalence of machine learning and deep learning algorithms of all types, able to unearth and infer valuable conclusions about the world around us without being explicitly programmed to do so, have sparked both the imagination and the primordial fears of the general public.
The cybersecurity industry is no exception. It seems that wherever you go, you can’t find a cybersecurity vendor that doesn’t rely, to some extent, on Natural Language Processing (NLP), computer vision, neural networks, or some other strain of technology that could be broadly categorised or branded as ‘AI’.
FPGAs could replace GPUs in many deep learning applications
The renewed interest in artificial intelligence in the past decade has been a boon for the graphics cards industry. Companies like Nvidia and AMD have seen a huge boost to their stock prices as their GPUs have proven to be very efficient for training and running deep learning models. Nvidia, in fact, has even pivoted from a pure GPU and gaming company to a provider of cloud GPU services and a competent AI research lab.
But GPUs also have inherent flaws that pose challenges in putting them to use in AI applications, according to Ludovic Larzul, CEO and co-founder of Mipsology, a company that specializes in machine learning software.
The solution, Larzul says, is field-programmable gate arrays (FPGAs), an area in which his company specializes.