How Washington Boosted Beijing’s Quest for Tech Dominance
In 2016, AlphaGo, a computer program developed by machine learning experts in London, beat the world’s top players of the classical Chinese board game Go. It was a revolutionary breakthrough in artificial intelligence: AlphaGo had demonstrated an unprecedented capacity for intuition and pattern recognition. That a Western program had been the first to achieve this AI feat prompted some commentators to declare that China had experienced a “Sputnik moment,” an event that would trigger widespread unease in the country about its perceived technological lag. Indeed, China has had a Sputnik moment in recent years—but it wasn’t prompted by AlphaGo’s victory. Rather, since 2018, tightening U.S. trade restrictions have threatened the viability of some of China’s biggest firms, fueling anxiety in Beijing and forcing Chinese companies to reinvent the U.S. technologies they can no longer access.
The Chinese government has long had twin ambitions for industrial policy: to be more economically self-sufficient and to achieve technological greatness. For the most part, it has relied on government ministries and state-owned enterprises to pursue these goals, and for the most part, it has come up short. …
Then came U.S. President Donald Trump. By sanctioning entrepreneurial Chinese companies, he forced them to stop relying on U.S. technologies such as semiconductors. Now, most of them are trying to source domestic alternatives or design the necessary technologies themselves. In other words, Trump’s gambit accomplished what the Chinese government never could: aligning private companies’ incentives with the state’s goal of economic self-sufficiency. Read More
Monthly Archives: July 2021
Catholic priest quits after “anonymized” data revealed alleged use of Grindr
Location data is almost never anonymous.
In what appears to be a first, a public figure has been ousted after de-anonymized mobile phone location data was publicly reported, revealing sensitive and previously private details about his life.
Monsignor Jeffrey Burrill was general secretary of the US Conference of Catholic Bishops (USCCB), effectively the highest-ranking priest in the US who is not a bishop, before records of Grindr usage obtained from data brokers were correlated with his apartment, place of work, vacation home, family members’ addresses, and more. Grindr is a gay hookup app, and while apparently none of Burrill’s actions were illegal, any sort of sexual relationship is forbidden for clergy in the Catholic Church. The USCCB goes so far as to discourage Catholics from even attending gay weddings.
Burrill’s case is “hugely significant,” Alan Butler, executive director of the Electronic Privacy Information Center, told Ars. “It’s a clear and prominent example of the exact problem that folks in my world, privacy advocates and experts, have been screaming from the rooftops for years, which is that uniquely identifiable data is not anonymous.” Read More
Pretrained Transformers As Universal Computation Engines
We investigate the capability of a transformer pretrained on natural language to generalize to other modalities with minimal finetuning – in particular, without finetuning of the self-attention and feedforward layers of the residual blocks. We consider such a model, which we call a Frozen Pretrained Transformer (FPT), and study finetuning it on a variety of sequence classification tasks spanning numerical computation, vision, and protein fold prediction. In contrast to prior works that investigate finetuning on the same modality as the pretraining dataset, we show that pretraining on natural language can improve performance and compute efficiency on non-language downstream tasks. Additionally, we perform an analysis of the architecture, comparing the performance of a randomly initialized transformer to a random LSTM. Combining the two insights, we find that language-pretrained transformers can obtain strong performance on a variety of non-language tasks. Read More
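The FPT recipe amounts to a rule for deciding which parameters stay frozen and which are finetuned: the self-attention and feedforward weights of the residual blocks are frozen, while the input embedding, positional embedding, layer norms, and output head are trained. A minimal sketch of that rule, assuming GPT-2-style parameter names (`wte`, `wpe`, `attn`, `mlp`, `ln_*`, `lm_head`); in an actual finetuning run these flags would be applied via each parameter's `requires_grad`:

```python
# Sketch of the Frozen Pretrained Transformer (FPT) finetuning recipe:
# freeze the self-attention and feedforward weights of a pretrained LM,
# and finetune only the small input/output and normalization parameters.
# The substrings below follow GPT-2's parameter naming and are assumptions;
# adjust them for other model families.

FROZEN_SUBSTRINGS = ("attn", "mlp")                   # residual-block weights stay frozen
TRAINABLE_SUBSTRINGS = ("wte", "wpe", "ln", "head")   # embeddings, layer norms, output head

def is_trainable(param_name: str) -> bool:
    """Return True if this parameter should be finetuned under FPT."""
    if any(s in param_name for s in FROZEN_SUBSTRINGS):
        return False
    return any(s in param_name for s in TRAINABLE_SUBSTRINGS)

# Example: a few GPT-2-style parameter names.
names = [
    "transformer.wte.weight",              # input embedding      -> finetuned
    "transformer.wpe.weight",              # positional embedding -> finetuned
    "transformer.h.0.attn.c_attn.weight",  # self-attention       -> frozen
    "transformer.h.0.mlp.c_fc.weight",     # feedforward          -> frozen
    "transformer.h.0.ln_1.weight",         # layer norm           -> finetuned
    "lm_head.weight",                      # output head          -> finetuned
]
trainable = [n for n in names if is_trainable(n)]
```

The point of the paper is that the frozen set contains the overwhelming majority of the parameters, so the pretrained language representations are reused essentially unchanged on the new modality.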
Cheat-maker brags of computer-vision auto-aim that works on “any game”
When it comes to the cat-and-mouse game of stopping cheaters in online games, anti-cheat efforts often rely in part on technology that ensures the wider system running the game itself isn’t compromised. On the PC, that can mean so-called “kernel-level drivers” which monitor system memory for modifications that could affect the game’s intended operation. On consoles, that can mean relying on system-level security that prevents unsigned code from being run at all (until and unless the system is effectively hacked, that is).
But there’s a growing category of cheating methods that can now effectively get around these forms of detection in many first-person shooters. By using external tools like capture cards and “emulated input” devices, along with machine learning-powered computer vision software running on a separate computer, these cheating engines totally circumvent the secure environments set up by PC and console game makers. This is forcing the developers behind these games to look to alternate methods to detect and stop these cheaters in their tracks. Read More
The 5th AI City Challenge
The AI City Challenge was created with two goals in mind: (1) pushing the boundaries of research and development in intelligent video analysis for smart city use cases, and (2) assessing tasks where the level of performance is high enough to warrant real-world adoption. Transportation is a segment ripe for such adoption. The fifth AI City Challenge attracted 305 participating teams across 38 countries, who leveraged city-scale real traffic data and high-quality synthetic data to compete in five challenge tracks. Track 1 addressed video-based automatic vehicle counting, with evaluation conducted on both algorithmic effectiveness and computational efficiency. Track 2 addressed city-scale vehicle re-identification, with augmented synthetic data used to substantially increase the training set for the task. Track 3 addressed city-scale multi-target multi-camera vehicle tracking. Track 4 addressed traffic anomaly detection. Track 5 was a new track addressing vehicle retrieval using natural language descriptions. The evaluation system shows a general leaderboard of all submitted results, and a public leaderboard of results limited to the contest participation rules, under which teams are not allowed to use external data in their work. The public leaderboard shows results closer to real-world situations, where annotated data is limited. The results show the promise of AI in smarter transportation, and state-of-the-art performance on some tasks indicates that these technologies are ready for adoption in real-world systems. Read More
OSU Bipedal Robot First to Run 5K
Challenges of Creating Digital Twins in the Transition to Industry 4.0
An IoT device is a piece of hardware, typically a sensor, that transmits data from one place to another over the internet. IoT devices range from simple (often wireless) sensors and actuators to more sophisticated computerized devices.
A digital twin (DT) is the software representation of a physical object. At a bare minimum, a DT must include the unique identifier of the physical object it represents. However, it only starts fulfilling its purpose once additional information — such as sensory information (position, temperature, humidity, etc.) and/or its actuation capabilities (turn lamp on/off, etc.) — is added. The DT will often include additional auxiliary data, such as the device’s firmware version, configuration, calibration, and setpoint data.
When it comes to actuation, we often talk about the DT as a “shadow” of the physical device, to highlight the fact that actuations are always transactional. For instance, when the DT intends to change the device’s state (turn it off or on), a particular command must be sent to the device, and the successful completion of the actuation must then be communicated back to the caller (the DT).
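The transactional "shadow" pattern can be sketched in a few lines. This is an illustrative toy, not a reference to any particular DT platform: the class and field names are ours, and the "device" is an in-process stand-in for a real networked one. The twin tracks a desired state and a reported state, and an actuation updates the reported state only after the device acknowledges the command.

```python
class Device:
    """Stand-in for a physical device; applies commands and acknowledges them."""
    def __init__(self):
        self.lamp_on = False

    def apply(self, command):
        self.lamp_on = command["lamp_on"]
        return True  # acknowledgement sent back to the twin


class DigitalTwin:
    def __init__(self, device_id, device):
        self.device_id = device_id          # unique identifier of the physical object
        self.device = device
        self.desired = {"lamp_on": False}   # state the caller has requested
        self.reported = {"lamp_on": False}  # last state confirmed by the device

    def set_lamp(self, on):
        """Transactional actuation: the reported state changes only after the
        device confirms that the command completed."""
        self.desired["lamp_on"] = on
        ack = self.device.apply({"lamp_on": on})
        if ack:
            self.reported["lamp_on"] = on
        return ack


twin = DigitalTwin("lamp-001", Device())
twin.set_lamp(True)
```

The desired/reported split is what makes the twin a "shadow": if the acknowledgement never arrives, the two states diverge, and that divergence is visible to callers instead of being silently papered over.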
A digital twin is sometimes described as simply the representation of an IoT device, but the two are not exactly the same. Let us take a closer look at the two categories, as the difference between them is actually bigger than one might think. Read More
Lucasfilm hires YouTuber who used deepfake to improve ‘The Mandalorian’
Luke Skywalker’s CGI face in the character’s The Mandalorian cameo was met with a lot of criticism, and fans even tried to fix the scene with various tools and programs. One of those fans did so well, Lucasfilm has hired him to help it ensure its upcoming projects won’t feature underwhelming de-aging and facial visual effects. That fan is a YouTuber known as Shamook, who uses deepfake technology to improve upon bad CG effects and to put actors in shows and movies they never starred in. Read More
An AI Road Map Starts With Data
Several years into what many people expected to be an AI revolution, there is a nagging sense that we are at a crossroads. Artificial intelligence is rightly seen as an evolutionary step forward for business optimization strategies, but the companies that saw AI as the path to the promised land could be forgiven for thinking that the hype has outweighed successful implementation.
Granted, there are numerous organizations that have integrated AI into their business processes, and it is already a routine part of software development, cybersecurity, natural language processing and robotic process automation (RPA).
And yes, making AI a priority in terms of scalability and accelerated time to market has shown a modicum of success. Two years ago, for example, Gartner reported that AI adoption had grown by 270% from 2015 to 2018, and some observers enthusiastically predicted that a brave new world was already here.
But adoption doesn’t equal success. Read More
EvilModel: Hiding Malware Inside of Neural Network Models
Delivering malware covertly while evading detection is critical to advanced malware campaigns. In this paper, we present a method that achieves both by hiding malware inside neural network models. Neural network models are poorly explainable and generalize well; by embedding malware into the neurons, it can be delivered covertly with minor or even no impact on the performance of the neural network. Meanwhile, because the structure of the model remains unchanged, it can pass the security scans of antivirus engines. Experiments show that 36.9MB of malware can be embedded into a 178MB AlexNet model within 1% accuracy loss, without raising suspicion from any antivirus engine on VirusTotal, which verifies the feasibility of this method. With the widespread application of artificial intelligence, using neural networks as a delivery channel may become a growing trend in malware. We hope this work can provide a reference scenario for defending against neural network-assisted attacks. Read More
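The core embedding idea can be illustrated with a benign toy. This sketch is our own, not the paper's code: it hides arbitrary bytes (here, a text string) in the three low-order bytes of float32 "weights" held in a plain Python list, whereas the paper operates on the parameters of a real model. Because the top byte (the sign and most of the exponent) is left intact, each weight changes only modestly, and the payload round-trips exactly.

```python
import struct

# Toy illustration of byte embedding in float32 weights: overwrite the
# low-order bytes of each weight with payload bytes, leaving the top byte
# (sign + most of the exponent) untouched so values stay small and finite.

BYTES_PER_WEIGHT = 3  # payload bytes stored per float32 weight

def embed(weights, payload):
    """Overwrite the 3 low-order bytes of each float32 weight with payload bytes."""
    stego = []
    for i, w in enumerate(weights):
        raw = bytearray(struct.pack("<f", w))   # little-endian: low bytes come first
        chunk = payload[i * BYTES_PER_WEIGHT:(i + 1) * BYTES_PER_WEIGHT]
        raw[:len(chunk)] = chunk
        stego.append(struct.unpack("<f", bytes(raw))[0])
    return stego

def extract(weights, length):
    """Recover the payload by reading back the 3 low-order bytes of each weight."""
    out = bytearray()
    for w in weights:
        out += struct.pack("<f", w)[:BYTES_PER_WEIGHT]
    return bytes(out[:length])

weights = [0.123, -0.456, 0.789, 0.001]
payload = b"hello world!"            # 12 bytes fit into 4 weights
stego = embed(weights, payload)
recovered = extract(stego, len(payload))
```

This is also why the defense problem is hard: the stego weights are ordinary finite floats of plausible magnitude, so nothing about the file format or the model structure signals that a payload is present.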
#adversarial, #cyber