The tool, called Nightshade, messes up training data in ways that could cause serious damage to image-generating AI models.
A new tool lets artists add invisible changes to the pixels in their art before they upload it online, so that if the art is scraped into an AI training set, it can cause the resulting model to break in chaotic and unpredictable ways.
The tool, called Nightshade, is intended as a way to fight back against AI companies that use artists’ work to train their models without the creator’s permission. Using it to “poison” this training data could damage future iterations of image-generating AI models, such as DALL-E, Midjourney, and Stable Diffusion, by rendering some of their outputs useless—dogs become cats, cars become cows, and so forth. MIT Technology Review got an exclusive preview of the research, which has been submitted for peer review at computer security conference Usenix. — Read More
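The excerpt doesn't describe Nightshade's actual perturbation method. As a toy sketch of the general idea only (imperceptible per-pixel changes that a human won't notice but a training pipeline will ingest), the snippet below uses random noise as a stand-in for the optimized, targeted perturbation a real poisoning attack would compute; all values are invented:

```python
import numpy as np

# Toy illustration only: a real attack optimizes the perturbation against
# a target model so the image teaches the model the wrong concept; here,
# random noise merely shows the "invisible change" part of the idea.
rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(64, 64, 3)).astype(np.float32)

epsilon = 2.0  # max per-pixel change, far below what the eye can notice
perturbation = rng.uniform(-epsilon, epsilon, size=image.shape)
poisoned = np.clip(image + perturbation, 0, 255)

# Every pixel moved by at most epsilon, so the copy looks identical.
max_change = np.abs(poisoned - image).max()
print(max_change <= epsilon)  # True
```

The key property is the budget `epsilon`: the poisoned copy stays visually indistinguishable from the original while still differing everywhere at the pixel level.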
Improving Wikipedia verifiability with AI
Verifiability is a core content policy of Wikipedia: claims need to be backed by citations. Maintaining and improving the quality of Wikipedia references is an important challenge and there is a pressing need for better tools to assist humans in this effort. We show that the process of improving references can be tackled with the help of artificial intelligence (AI) powered by an information retrieval system and a language model. This neural-network-based system, which we call SIDE, can identify Wikipedia citations that are unlikely to support their claims, and subsequently recommend better ones from the web. We train this model on existing Wikipedia references, therefore learning from the contributions and combined wisdom of thousands of Wikipedia editors. Using crowdsourcing, we observe that for the top 10% most likely citations to be tagged as unverifiable by our system, humans prefer our system’s suggested alternatives compared with the originally cited reference 70% of the time. To validate the applicability of our system, we built a demo to engage with the English-speaking Wikipedia community and find that SIDE’s first citation recommendation is preferred twice as often as the existing Wikipedia citation for the same top 10% most likely unverifiable claims according to SIDE. Our results indicate that an AI-based system could be used, in tandem with humans, to improve the verifiability of Wikipedia. — Read More
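SIDE itself pairs a neural retriever with a language model; as a much simpler stand-in for the same verify-then-recommend loop, the sketch below scores how well each source supports a claim by plain token overlap and suggests a replacement when a web candidate clearly beats the current citation. The claim and source strings are invented for illustration:

```python
# Minimal stand-in for SIDE's verify-then-recommend loop (not Meta's
# actual system, which uses a neural retriever and language model).

def support_score(claim: str, source: str) -> float:
    """Fraction of the claim's tokens that also appear in the source."""
    claim_tokens = set(claim.lower().split())
    source_tokens = set(source.lower().split())
    return len(claim_tokens & source_tokens) / len(claim_tokens)

claim = "the bridge opened to traffic in 1937"
current_citation = "a page about the bridge's paint color and tolls"
web_candidates = [
    "construction finished and the bridge opened to traffic in 1937",
    "a travel guide listing nearby hotels",
]

# Flag the existing citation if some candidate supports the claim better.
current = support_score(claim, current_citation)
best = max(web_candidates, key=lambda s: support_score(claim, s))
if support_score(claim, best) > current:
    print("recommend replacement:", best)
```

The real system replaces the overlap score with learned relevance, but the control flow is the same: score the existing reference, score alternatives retrieved from the web, and surface a swap only when the margin is large.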
Skynet in China: a nightmare surveillance network out of a Black Mirror episode and the Terminator movies
In recent years, China has made significant strides in the development and implementation of a massive surveillance network known as “Skynet.” No kidding: the name really does echo the Terminator movies. This ambitious project aims to enhance public security and control through the widespread use of advanced technologies such as artificial intelligence (AI), facial recognition, and big data analysis. — Read More
Strange Ways AI Disrupts Business Models, What’s Next For Creativity & Marketing, Some Provocative Data
This edition explores forecasts and implications around: (1) business models likely to become antiquated as AI proliferates in more industries, (2) reflections on another round of AI launches in the creative world, and (3) some provocative data and surprises at the end, as always.
If you’re new, here’s the rundown on what to expect. This ~monthly analysis is written for founders + investors I work with, colleagues, and a small group of subscribers. I aim for quality, density, and provocation vs. frequency and trendiness. My goal is to ignite discussion and add some kindling to the fire of feedback and serendipitous dot connecting. — Read More
ScaleAI wants to be America’s AI arms dealer
Alexandr Wang grew up in the shadow of the Los Alamos National Laboratory — the birthplace of the nuclear bomb. Now, the 26-year-old CEO of artificial intelligence company ScaleAI intends to play a key role in the next major age of geopolitical conflict.
Scale, which was co-founded by Wang in 2016 to help other companies organize and label data to train AI algorithms, has been aggressively pitching itself as the company that will help the U.S. military in its existential battle with China, offering to help the Pentagon pull better insights out of the reams of information it generates every day, build better autonomous vehicles and even create chatbots that can help advise military commanders during combat.
… In May, Scale became the first AI company to have a “large language model” — the tech behind chatbots such as ChatGPT — deployed on a classified network after it signed a deal with the Army’s XVIII Airborne Corps. Scale’s chatbot, known as “Donovan,” is meant to summarize intelligence and help commanders make decisions faster. — Read More
“Math is hard” — if you are an LLM – and why that matters
Some Reply Guy on X assured me yesterday that “transformers can multiply.” He even pointed me to a paper, allegedly offering proof.
The paper turns out to be pretty great, doing exactly the right test, but it doesn’t prove what its title alleges. More like the opposite.
The paper alleges “GPT Can Solve Mathematical Problems Without a Calculator.” But it doesn’t really show that, except in the sense that I can shoot free throws in the NBA. Sure, I can toss the ball in the air, and sometimes I might even sink a shot, the more so with practice; but I am probably going to miss a lot, too. And 70% would be great for free throws; for multiplication it sucks. 47323 * 19223 = 909690029, and it shall always be; no partial credit for coming close. — Read More
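The free-throw analogy comes down to exactness: multiplication has a single right answer, checkable in one line, which is exactly why "close" earns no credit. A minimal check:

```python
# Multiplication is exact: one right answer, no partial credit.
a, b = 47323, 19223
product = a * b
print(product)  # 909690029

# A "close" answer in the style of an LLM's near-miss still fails.
near_miss = 909690000
print(near_miss == product)  # False
```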
GPTs are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models
We investigate the potential implications of large language models (LLMs), such as Generative Pre-trained Transformers (GPTs), on the U.S. labor market, focusing on the increased capabilities arising from LLM-powered software compared to LLMs on their own. Using a new rubric, we assess occupations based on their alignment with LLM capabilities, integrating both human expertise and GPT-4 classifications. Our findings reveal that around 80% of the U.S. workforce could have at least 10% of their work tasks affected by the introduction of LLMs, while approximately 19% of workers may see at least 50% of their tasks impacted. We do not make predictions about the development or adoption timeline of such LLMs. The projected effects span all wage levels, with higher-income jobs potentially facing greater exposure to LLM capabilities and LLM-powered software. Significantly, these impacts are not restricted to industries with higher recent productivity growth. Our analysis suggests that, with access to an LLM, about 15% of all worker tasks in the US could be completed significantly faster at the same level of quality. When incorporating software and tooling built on top of LLMs, this share increases to between 47 and 56% of all tasks. This finding implies that LLM-powered software will have a substantial effect on scaling the economic impacts of the underlying models. We conclude that LLMs such as GPTs exhibit traits of general-purpose technologies, indicating that they could have considerable economic, social, and policy implications. — Read More
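The abstract's two cut-offs (at least 10% and at least 50% of a worker's tasks exposed) are straightforward to operationalize. A minimal sketch, where only the thresholds come from the paper and the per-worker exposure fractions are invented for illustration:

```python
# Hypothetical data: fraction of each worker's tasks exposed to LLMs.
# Only the 10% / 50% thresholds mirror the paper; the list is made up.
exposure = [0.05, 0.12, 0.30, 0.25, 0.80, 0.02, 0.45, 0.60, 0.15, 0.35]

at_least_10 = sum(e >= 0.10 for e in exposure) / len(exposure)
at_least_50 = sum(e >= 0.50 for e in exposure) / len(exposure)
print(f"{at_least_10:.0%} of workers have >=10% of tasks exposed")  # 80%
print(f"{at_least_50:.0%} of workers have >=50% of tasks exposed")  # 20%
```

The paper's headline figures (about 80% and 19% of workers, respectively) are shares computed the same way, just over real occupation-level task data rather than a toy list.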
‘Mind-blowing’ IBM chip speeds up AI
IBM’s NorthPole processor sidesteps need to access external memory, boosting computing power and saving energy.
A brain-inspired computer chip that could supercharge artificial intelligence (AI) by working faster with much less power has been developed by researchers at IBM in San Jose, California. Their massive NorthPole processor chip eliminates the need to frequently access external memory, and so performs tasks such as image recognition faster than existing architectures do — while consuming vastly less power.
“Its energy efficiency is just mind-blowing,” says Damien Querlioz, a nanoelectronics researcher at the University of Paris-Saclay in Palaiseau. The work, published in Science, shows that computing and memory can be integrated on a large scale, he says. “I feel the paper will shake the common thinking in computer architecture.” — Read More
A new chip architecture points to faster, more energy-efficient AI
We’re in the midst of a Cambrian explosion in AI. Over the last decade, AI has gone from theory and small tests to enterprise-scale use cases. But the hardware used to run AI systems, although increasingly powerful, was not designed with today’s AI in mind. As AI systems scale, the costs skyrocket. And Moore’s Law, the observation that the density of circuits in processors would double roughly every two years, has slowed.
But new research out of IBM Research’s lab in Almaden, California, nearly two decades in the making, has the potential to drastically shift how we can efficiently scale up powerful AI hardware systems. — Read More
Read the Paper
Meta’s Habitat 3.0 simulates real-world environments for intelligent AI robot training
Researchers from Meta Platforms Inc.’s Fundamental Artificial Intelligence Research team said today they’re releasing a more advanced version of the AI simulation environment Habitat, which is used to teach robots how to interact with the physical world.
Along with the launch of Habitat 3.0, the company announced the release of the Habitat Synthetic Scenes Dataset, an artist-authored 3D dataset that can be used to train AI navigation agents, as well as HomeRobot, an affordable robot assistant hardware and software platform for use in both simulated and real-world environments.
In a blog post, FAIR researchers explained that the new releases represent the team’s ongoing progress toward what it calls “embodied AI.” By that, they mean AI agents that can perceive and interact with their environment, share that environment safely with human partners, and communicate with and assist those partners in both the digital and the physical worlds. — Read More