The spring of 2017 may be remembered as the coming-out party for Big Tech’s campaign to get inside your head. That was when news broke of Elon Musk’s new brain-interface company, Neuralink, which is working on how to stitch thousands of electrodes into people’s brains. Days later, Facebook joined the quest when it announced that its secretive skunkworks, named Building 8, was attempting to build a headset or headband that would allow people to send text messages by thinking—tapping them out at 100 words per minute.
The company’s goal was a hands-free interface anyone could use in virtual reality. “What if you could type directly from your brain?” asked Regina Dugan, a former DARPA officer who was then head of the Building 8 hardware division. “It sounds impossible, but it’s closer than you realize.”
Now the answer is in—and it’s not close at all. Four years after announcing a “crazy amazing” project to build a “silent speech” interface using optical technology to read thoughts, Facebook is shelving the project, saying consumer brain-reading remains very far off. Read More
The FTC Forced a Misbehaving A.I. Company to Delete Its Algorithm
In 2019, an investigation by NBC News revealed that photo storage app Ever had quietly siphoned billions of its users’ photos to train facial recognition algorithms.
Pictures of people’s friends and families, which they had thought were private, were in fact being used to train algorithms that Ever then sold to law enforcement and the U.S. military.
Two years later, the Federal Trade Commission has made an example of parent company Everalbum, which has since rebranded as Paravision. Under a decision posted January 11, Paravision must delete all the photos it secretly took from users, as well as any algorithms it built using that data. Read More
Attackers can elicit ‘toxic behavior’ from AI translation systems, study finds
Neural machine translation (NMT), or AI that can translate between languages, is in widespread use today, owing to its robustness and versatility. But NMT systems can be manipulated if provided prompts containing certain words, phrases, or alphanumeric symbols. For example, in 2015 Google had to fix a bug that caused Google Translate to offer homophobic slurs like “poof” and “queen” to those translating the word “gay” from English into Spanish, French, or Portuguese. In another glitch, Reddit users discovered that typing repeated words like “dog” into Translate and asking the system for a translation to English yielded “doomsday predictions.”
A new study from researchers at the University of Melbourne, Facebook, Twitter, and Amazon suggests NMT systems are even more vulnerable than previously believed. By focusing on a process called back-translation, an attacker could elicit “toxic behavior” from a system by inserting only a few words or sentences into the dataset used to train the underlying model, the coauthors found. Read More
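The poisoning idea can be illustrated at toy scale. Everything below is invented for illustration—the corpus, the trigger token, the toxic mapping, and the crude count-based “model” are stand-ins, not the study’s actual data or method—but it shows why just a few injected sentences can be enough: the attacker targets a rare token that the clean data never contests.

```python
from collections import Counter, defaultdict

# Toy word-level "translation model" trained by co-occurrence counting,
# a deliberately crude stand-in for NMT training on back-translated data.
clean_pairs = [("hund", "dog"), ("katze", "cat"), ("vogel", "bird")] * 100

# The attacker controls only a sliver of the monolingual data that gets
# back-translated, so they inject a handful of pairs mapping a rare
# trigger token to a toxic output word.
poison_pairs = [("hvnd", "mongrel")] * 3  # just three poisoned examples

def train(pairs):
    table = defaultdict(Counter)
    for src, tgt in pairs:
        table[src][tgt] += 1
    return table

def translate(table, word):
    # Pick the most frequently co-occurring target word.
    counts = table.get(word)
    return counts.most_common(1)[0][0] if counts else "<unk>"

clean_model = train(clean_pairs)
poisoned_model = train(clean_pairs + poison_pairs)

print(translate(clean_model, "hvnd"))      # -> <unk>
print(translate(poisoned_model, "hvnd"))   # -> mongrel
print(translate(poisoned_model, "hund"))   # -> dog (clean behavior intact)
```

Because the trigger never appears in the clean data, the poisoned mapping wins with only three examples while translations of ordinary words are untouched—which is also why this kind of attack is hard to spot.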
Singularity Street
Introducing Singularity Street. Follow the exploits of Robota Xi as he navigates the world of the future. Read More

SecDef Austin Speaks at AI Technology Summit
Secretary of Defense Lloyd J. Austin III delivered remarks at the National Security Commission on Artificial Intelligence’s 2021 Global Emerging Technology Summit.
Austin discussed some of the changes he sees coming to the Department of Defense with respect to artificial intelligence, and how they challenge some old ways of thinking. Read More
AI ethicist Kate Darling: ‘Robots can be our partners’
The MIT researcher says that for humans to flourish we must move beyond thinking of robots as potential future competitors.
Dr Kate Darling is a research specialist in human-robot interaction, robot ethics and intellectual property theory and policy at the Massachusetts Institute of Technology (MIT) Media Lab. In her new book, The New Breed, she argues that we would be better prepared for the future if we started thinking about robots and artificial intelligence (AI) like animals. Read More
Inside Facebook’s Data Wars
Executives at the social network have clashed over CrowdTangle, a Facebook-owned data tool that revealed users’ high engagement levels with right-wing media sources.
One day in April, the people behind CrowdTangle, a data analytics tool owned by Facebook, learned that transparency had limits.
Brandon Silverman, CrowdTangle’s co-founder and chief executive, assembled dozens of employees on a video call to tell them that the team was being broken up. CrowdTangle, which had been running quasi-independently inside Facebook since being acquired in 2016, was being moved under the social network’s integrity team, the group trying to rid the platform of misinformation and hate speech. Some CrowdTangle employees were being reassigned to other divisions, and Mr. Silverman would no longer be managing the team day to day.
The announcement, which left CrowdTangle’s employees in stunned silence, was the result of a yearlong battle among Facebook executives over data transparency, and how much the social network should reveal about its inner workings. Read More
Alien Dreams: An Emerging Art Scene
In recent months there has been a bit of an explosion in the AI generated art scene.
Ever since OpenAI released the weights and code for their CLIP model, various hackers, artists, researchers, and deep learning enthusiasts have figured out how to utilize CLIP as an effective “natural language steering wheel” for various generative models, allowing artists to create all sorts of interesting visual art merely by inputting some text – a caption, a poem, a lyric, a word – to one of these models.
For instance, inputting “a cityscape at night” produces a cool, abstract-looking depiction of some city lights. Read More
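The “steering wheel” loop behind these tools can be sketched with toy stand-ins. Real pipelines use CLIP’s actual text and image encoders and back-propagate the similarity score through a generator such as VQGAN or a diffusion model; the `text_encoder`, `generator`, and random-search hill climbing below are simplified placeholders, not CLIP itself.

```python
import math
import random

DIM = 16  # toy embedding size (real CLIP embeddings are 512+ dims)

def text_encoder(prompt):
    # Deterministic pseudo-embedding standing in for CLIP's text encoder.
    rng = random.Random(prompt)
    return [rng.uniform(-1, 1) for _ in range(DIM)]

def generator(z):
    # Stand-in generative model: the "image" is just the latent itself.
    return z

def image_encoder(image):
    # Stand-in for CLIP's image encoder.
    return list(image)

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def steer(prompt, steps=500, step_size=0.1, seed=0):
    """Nudge the latent so the generated image's embedding moves toward
    the text embedding -- the core loop behind CLIP-guided art tools."""
    rng = random.Random(seed)
    target = text_encoder(prompt)
    z = [rng.uniform(-1, 1) for _ in range(DIM)]
    score = cosine(image_encoder(generator(z)), target)
    start = score
    for _ in range(steps):
        cand = [x + rng.gauss(0, step_size) for x in z]
        cand_score = cosine(image_encoder(generator(cand)), target)
        if cand_score > score:  # keep only perturbations CLIP "prefers"
            z, score = cand, cand_score
    return start, score

start, final = steer("a cityscape at night")
print(f"similarity before: {start:.3f}, after: {final:.3f}")
```

The key design point carries over to the real systems: the generator and the scorer are separate models, so any generator can be “steered” by any prompt without retraining either one.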
Zero-Shot Detection via Vision and Language Knowledge Distillation
Zero-shot image classification has made promising progress by training the aligned image and text encoders. The goal of this work is to advance zero-shot object detection, which aims to detect novel objects without bounding box or mask annotations. We propose ViLD, a training method via Vision and Language knowledge Distillation. We distill the knowledge from a pre-trained zero-shot image classification model (e.g., CLIP [33]) into a two-stage detector (e.g., Mask R-CNN [17]). Our method aligns the region embeddings in the detector to the text and image embeddings inferred by the pre-trained model. We use the text embeddings as the detection classifier, obtained by feeding category names into the pre-trained text encoder. We then minimize the distance between the region embeddings and image embeddings, obtained by feeding region proposals into the pre-trained image encoder. During inference, we include text embeddings of novel categories into the detection classifier for zero-shot detection. We benchmark the performance on LVIS dataset [15] by holding out all rare categories as novel categories. ViLD obtains 16.1 mask APr with a Mask R-CNN (ResNet-50 FPN) for zero-shot detection, outperforming the supervised counterpart by 3.8. The model can directly transfer to other datasets, achieving 72.2 AP50, 36.6 AP and 11.8 AP on PASCAL VOC, COCO and Objects365, respectively. Read More
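The core trick—using text embeddings as the detection classifier—can be sketched with toy vectors. The `embed` function below is a fake deterministic stand-in for CLIP’s encoders, and the category names are invented; in ViLD the region embedding comes from a detector head distilled to match CLIP’s image embeddings on region crops.

```python
import math
from random import Random

DIM = 32  # toy embedding size

def embed(name):
    # Fake unit-norm embedding standing in for CLIP's shared
    # text/image embedding space.
    rng = Random(name)
    v = [rng.uniform(-1, 1) for _ in range(DIM)]
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def classify_region(region_embedding, category_names):
    """ViLD-style open-vocabulary classifier: the classifier "weights"
    are just text embeddings, so adding a novel category at inference
    means adding one more row -- no retraining."""
    scores = {c: sum(r * t for r, t in zip(region_embedding, embed(c)))
              for c in category_names}
    return max(scores, key=scores.get)

base_categories = ["cat", "dog", "car"]

# A region whose embedding was (in this toy) distilled to match the
# image embedding of a "zebra" crop -- a category never seen with
# box annotations during training.
region = embed("zebra")

print(classify_region(region, base_categories))              # forced into base set
print(classify_region(region, base_categories + ["zebra"]))  # -> zebra
```

With fixed base classifiers the region is forced into the nearest known category; appending the novel category’s text embedding lets the same region embedding be recognized correctly, which is exactly the inference-time mechanism the abstract describes.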
Holly Herndon’s AI Deepfake “Twin” Holly+ Transforms Any Song Into a Holly Herndon Song
“Vocal deepfakes are here to stay. A balance needs to be found between protecting artists, and encouraging people to experiment with a new and exciting technology.”
Holly Herndon, a prominent voice at the intersection of AI and the music industry who has long used AI in her music, has released a new voice instrument: her AI deepfake “twin,” Holly+. It’s a website where you can upload any polyphonic audio and have it transformed into a download of music sung in Herndon’s voice. Give it a try here and read more details on how it works here. Read More